Welcome to the Werkzeug 0.9.4 documentation.
Generally, if you are new to Werkzeug or WSGI development, this is the place to start.
Werkzeug requires Python 2.6 or higher. If you need to support Python versions older than 2.6, you can download an older release of Werkzeug (Python 2.6 or higher is strongly recommended). Werkzeug also supports Python 3; see the Python 3 Notes for more information.
You can install the latest version of Werkzeug with easy_install:
easy_install Werkzeug
Alternatively, you can use pip:
pip install Werkzeug
We strongly recommend using these tools in combination with virtualenv.
This installs a Werkzeug egg into your site-packages directory.
Note that if you do not have setuptools installed, the command above will download and install it automatically. This requires a working internet connection.
Virtualenv is probably what you want to use during development, and if you have shell access to your production machines, you will probably want to use it there, too.
What problem does virtualenv solve? If you like Python as much as I do, chances are you want to use it for projects other than Werkzeug-based web applications. The more projects you have, the more likely it is that you will be working with different versions of Python itself, or at least with libraries that support different Python versions. Let's face it: quite often libraries break backwards compatibility, and it's unlikely that any serious application has no dependencies at all. So what do you do if two or more of your projects have conflicting dependencies?
Virtualenv to the rescue! It allows multiple side-by-side installations of Python, one for each project. It doesn't actually install a separate copy of Python, but it does provide a clever way to keep the different project environments isolated.
Let's see how virtualenv works!
If you are on Mac OS X or Linux, one of the following two commands will work for you:
$ sudo easy_install virtualenv
or, even better:
$ sudo pip install virtualenv
Either of these commands will install virtualenv on your system. It might even be available through your package manager; if you are on Ubuntu, try:
$ sudo apt-get install python-virtualenv
If you are on Windows and don't have the easy_install command, you must install it first. Once you have it installed, run the same commands as above, but without the sudo prefix.
Once you have virtualenv installed, fire up a shell and create your own environment. I usually create a myproject folder with an env folder inside:
$ mkdir myproject
$ cd myproject
$ virtualenv env
New python executable in env/bin/python
Installing setuptools............done.
Now, whenever you want to work on a project, you only have to activate the corresponding environment. On OS X and Linux, do the following:
$ . env/bin/activate
(Note the space between the dot and the script name. The dot means the script runs in the context of the current shell. If this command does not work in your shell, try replacing the dot with source.)
If you are a Windows user, the following command is for you:
$ env\scripts\activate
Either way, you are now using your virtualenv (notice how the prompt of your shell has changed to show the active environment).
Now install Werkzeug:
$ pip install Werkzeug
A few seconds later, Werkzeug is ready to use.
Werkzeug originally had a magical import system hook in place that, when enabled, made it possible to import everything from one module while still loading the actual implementations lazily as needed. Unfortunately this turned out to be slow and unreliable on alternative Python implementations and on Google App Engine.
Starting with 0.7 we deprecated the short imports and strongly encourage importing from the actual implementation modules. Werkzeug 1.0 will remove support for the magical import hook entirely.
Because finding out by hand which functions are actually imported, and rewriting them, is a painful and tedious process, we wrote a tool to aid this transition.
For instance, in Werkzeug versions before 0.7 the recommended way to use the escape function was as follows:
from werkzeug import escape
With Werkzeug 0.7, the recommended way is to import the escape function directly from the utils module (and with 1.0 this will become mandatory). To automatically rewrite all such imports, you can use the werkzeug-import-rewrite script.
You can execute it by passing it a list of folders containing Python files based on Werkzeug. It will print out an hg/git compatible udiff patch file. Like this:
$ python werkzeug-import-rewrite.py . > new-imports.udiff
Apply the patch file with one of the following commands:
hg:
hg import new-imports.udiff
git:
git apply new-imports.udiff
patch:
patch -p1 < new-imports.udiff
Some parts of Werkzeug will no longer be maintained; we strongly recommend replacing them even though they will keep working for a while.
Welcome to the Werkzeug tutorial, in which we will build a TinyURL-like site that stores URLs. The libraries we will use are the template engine Jinja 2, the data store redis, and of course Werkzeug for the WSGI protocol layer.
You can use pip to install the required libraries:
pip install Jinja2 redis
Also make sure to have a redis server running on your local machine. If you are on OS X, you can use brew to install it:
brew install redis
If you use Ubuntu or Debian, you can use apt-get:
sudo apt-get install redis
Redis was developed for UNIX systems and was never really designed with Windows in mind. For development purposes, however, the unofficial ports work well enough; you can get them from github.
In this tutorial we will create a short-URL service together with Werkzeug. Please keep in mind that Werkzeug is not a framework: it is a library of WSGI utilities with which you can create your own framework or application. Werkzeug is very flexible, and the approach this tutorial takes is just one of the many it makes possible.
For the data layer we will use redis instead of a relational database, to keep things simple and because that is the kind of job redis excels at.
The final result will look something like this:
Werkzeug is a toolkit for WSGI. WSGI is the protocol by which a web application talks to the server, and it also ensures that web applications can work nicely together.
A basic "Hello World" WSGI application looks like this:
def application(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Hello World!']
A WSGI application is something you call, passing it the environ and a start_response callable. The environ contains all the incoming request information; start_response is used to indicate the start of the response. With Werkzeug you don't have to deal directly with either of these low-level details, because the request and response data are wrapped for you.
The request data takes the environ object, and Werkzeug allows you to access that data in a pleasant way. The response object is itself a WSGI application and provides a much nicer way to create responses.
Here is how you would write that application with response objects:
from werkzeug.wrappers import Response
def application(environ, start_response):
response = Response('Hello World!', mimetype='text/plain')
return response(environ, start_response)
Here is an expanded version that looks at the query string in the URL (the point being that a name parameter in the URL will replace "World"):
from werkzeug.wrappers import Request, Response
def application(environ, start_response):
request = Request(environ)
text = 'Hello %s!' % request.args.get('name', 'World')
response = Response(text, mimetype='text/plain')
return response(environ, start_response)
And that is already enough WSGI knowledge to get going.
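Since a WSGI application is nothing but a callable, you can also exercise the example above entirely by hand, without any server. The harness below is a hypothetical sketch: the environ dict contains only the keys this particular application reads, and a fake start_response simply records what it receives. (Note that under Python 3 and PEP 3333 the body must be bytes, while the Python 2 examples in this document use plain strings.)

```python
# Minimal WSGI app, as above, but with a bytes body (Python 3 / PEP 3333).
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello World!']

captured = {}

def fake_start_response(status, headers):
    # Record status and headers instead of writing them to a socket.
    captured['status'] = status
    captured['headers'] = dict(headers)

# A stripped-down environ: just the keys a real server would also set.
environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': '/'}
body = b''.join(application(environ, fake_start_response))
```

Iterating over the return value and joining it yields the response body; this is essentially what every WSGI server does for you.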
Before we get started, let's create the folders needed for this application:
/shortly
/static
/templates
The shortly folder is not a Python package; it just holds our project files. Our entry module will live directly at the root of that folder. The static folder is for static files such as CSS and JavaScript, which users can access directly over HTTP. The templates folder is where Jinja2 looks for templates; the templates you create later in this tutorial go into that folder.
Now let's actually start writing our application module. Create a shortly.py file in the shortly folder. First, a bunch of imports. I am pulling in all the imports here to avoid confusion, even if they are not used right away:
import os
import redis
import urlparse
from werkzeug.wrappers import Request, Response
from werkzeug.routing import Map, Rule
from werkzeug.exceptions import HTTPException, NotFound
from werkzeug.wsgi import SharedDataMiddleware
from werkzeug.utils import redirect
from jinja2 import Environment, FileSystemLoader
Next we create the basic structure of our application, along with a function that creates a new application instance and, through WSGI middleware, optionally exports the files in the static folder:
class Shortly(object):
def __init__(self, config):
self.redis = redis.Redis(config['redis_host'], config['redis_port'])
def dispatch_request(self, request):
return Response('Hello World!')
def wsgi_app(self, environ, start_response):
request = Request(environ)
response = self.dispatch_request(request)
return response(environ, start_response)
def __call__(self, environ, start_response):
return self.wsgi_app(environ, start_response)
def create_app(redis_host='localhost', redis_port=6379, with_static=True):
app = Shortly({
'redis_host': redis_host,
'redis_port': redis_port
})
if with_static:
app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {
'/static': os.path.join(os.path.dirname(__file__), 'static')
})
return app
Finally we add some code to start a local development server with automatic code reloading and a debugger:
if __name__ == '__main__':
from werkzeug.serving import run_simple
app = create_app()
run_simple('127.0.0.1', 5000, app, use_debugger=True, use_reloader=True)
The idea is simple: our Shortly class is an actual WSGI application. The __call__ method directly dispatches to wsgi_app. We do it this way so that we can wrap wsgi_app in middlewares when calling it, as we do in the create_app function. wsgi_app creates a Request object, calls dispatch_request with it, and then calls the returned Response object as a WSGI application with environ and start_response. As you can see, both the Shortly class we create and the Werkzeug request and response objects implement the WSGI interface; in the end, dispatch_request simply returns another WSGI application.
create_app can be used to create a new instance of our application. Not only can it configure the application with parameters, it can also optionally add middleware that exports the static files. That way we have access to the static files even when the server is not configured to serve them, which is very helpful for development.
You should now be able to execute this file with python and see a server on your own machine:
$ python shortly.py
* Running on http://127.0.0.1:5000/
* Restarting with reloader: stat() polling
It tells you that the reloader is active. The reloader uses various techniques to determine whether any files changed on disk, and then automatically restarts the server.
Open the URL in your browser and you should see "Hello World!".
Now that we have the basic application class, we can make the constructor do something useful: we want to be able to render templates and connect to redis. So let's extend the class a bit:
def __init__(self, config):
self.redis = redis.Redis(config['redis_host'], config['redis_port'])
template_path = os.path.join(os.path.dirname(__file__), 'templates')
self.jinja_env = Environment(loader=FileSystemLoader(template_path),
autoescape=True)
def render_template(self, template_name, **context):
t = self.jinja_env.get_template(template_name)
return Response(t.render(context), mimetype='text/html')
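The render_template helper above depends on Jinja2. As a stdlib-only illustration of the same pattern (load a template, fill in a context, wrap the result), string.Template can stand in. This sketch is purely illustrative and is not how Jinja2 works:

```python
from string import Template

def render(template_source, **context):
    # Substitute $-style placeholders with the given context values.
    return Template(template_source).substitute(context)

greeting = render('Hello $name!', name='Werkzeug')
```

In the real application, Jinja2 additionally gives us template inheritance and automatic HTML escaping, which string.Template does not.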
The next step is routing. Routing is how we match and parse the URL. Werkzeug provides a flexible integrated routing system: you create a Map instance and add a number of Rule objects. Each rule tries to match the URL and, if it matches, yields an "endpoint". The endpoint is typically a string used to uniquely identify the URL. We could also use it to reverse URLs, but that is not something we will do in this tutorial.
Put this into the constructor:
self.url_map = Map([
Rule('/', endpoint='new_url'),
Rule('/<short_id>', endpoint='follow_short_link'),
Rule('/<short_id>+', endpoint='short_link_details')
])
Here we create a URL map with three rules. The first rule, /, is the root of the URL space, where we will invoke the logic that creates a new URL. The second rule points to the target URL of a given short id, and the last one, which has the same pattern as the second but with a trailing plus sign (+), shows the link details instead.
So how do endpoints end up calling a function? That is up to you. The way this tutorial does it is by calling the method on_ + endpoint on the class itself. Here is how that works:
def dispatch_request(self, request):
adapter = self.url_map.bind_to_environ(request.environ)
try:
endpoint, values = adapter.match()
return getattr(self, 'on_' + endpoint)(request, **values)
except HTTPException, e:
return e
We bind the URL map to the current environment, which gives us back a URLAdapter. The adapter can be used to match the request, and also to reverse URLs. The match method returns the endpoint and a dictionary of values parsed from the URL. For instance, the rule for follow_short_link has a variable part called short_id, so when we go to http://localhost:5000/foo we get the following values back:
endpoint = 'follow_short_link'
values = {'short_id': u'foo'}
If we don't match anything, a NotFound exception is raised, which is itself an HTTPException. All HTTP exceptions are WSGI applications in their own right that render a default error page, so we simply catch them and return them as responses.
If everything works out, we call the on_ + endpoint function with the request as the first argument and all the URL values as keyword arguments, and return the response object that method produces.
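The endpoint-to-method convention is plain Python and can be sketched without any routing machinery. The class and endpoint names below are illustrative only:

```python
# getattr-based dispatch: map an endpoint string to an on_<endpoint> method.
class App:
    def on_new_url(self, request):
        return 'new_url view called'

    def on_follow_short_link(self, request, short_id):
        return 'following %s' % short_id

    def dispatch(self, endpoint, request, **values):
        # In the real application, endpoint and values come from
        # adapter.match(); here they are supplied by hand.
        return getattr(self, 'on_' + endpoint)(request, **values)

app = App()
result = app.dispatch('follow_short_link', None, short_id='foo')
```

The convention keeps the URL map declarative while the view logic lives in ordinary methods.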
Let's start with the first view: the one for new URLs:
def on_new_url(self, request):
error = None
url = ''
if request.method == 'POST':
url = request.form['url']
if not is_valid_url(url):
error = 'Please enter a valid URL'
else:
short_id = self.insert_url(url)
return redirect('/%s+' % short_id)
return self.render_template('new_url.html', error=error, url=url)
The logic is easy to follow: we check that the request method is POST; if it is, we validate the URL and insert it into the database, then redirect to the details page. To make this work we still need to write one more function and one helper method. Here is the URL validation function:
def is_valid_url(url):
parts = urlparse.urlparse(url)
return parts.scheme in ('http', 'https')
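The tutorial targets Python 2, where the module is called urlparse; on Python 3 the same validator reads as follows (a sketch using urllib.parse):

```python
from urllib.parse import urlparse

def is_valid_url(url):
    # Accept only http/https URLs; anything else has a different
    # scheme, or no scheme at all.
    parts = urlparse(url)
    return parts.scheme in ('http', 'https')

ok = is_valid_url('http://example.com')
bad = is_valid_url('ftp://example.com')
```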
And to insert the URL into the database, all we need is the following method on our class:
def insert_url(self, url):
short_id = self.redis.get('reverse-url:' + url)
if short_id is not None:
return short_id
url_num = self.redis.incr('last-url-id')
short_id = base36_encode(url_num)
self.redis.set('url-target:' + short_id, url)
self.redis.set('reverse-url:' + url, short_id)
return short_id
The key reverse-url: plus the URL stores the short id for that URL. If the URL was already submitted, we simply return the stored short id; otherwise we increment the last-url-id key, convert the number to base 36, and then store both the link target and the reverse mapping in redis. Here is the function that converts a number to base 36:
def base36_encode(number):
assert number >= 0, 'positive integer required'
if number == 0:
return '0'
base36 = []
while number != 0:
number, i = divmod(number, 36)
base36.append('0123456789abcdefghijklmnopqrstuvwxyz'[i])
return ''.join(reversed(base36))
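To see the whole key scheme in action without a running redis server, the sketch below exercises insert_url against FakeRedis, a hypothetical in-memory stand-in that implements just the get/set/incr subset used here:

```python
class FakeRedis:
    # Just enough of the redis API for this example; not real redis.
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value

    def incr(self, key):
        self.store[key] = int(self.store.get(key, 0)) + 1
        return self.store[key]

def base36_encode(number):
    if number == 0:
        return '0'
    base36 = []
    while number != 0:
        number, i = divmod(number, 36)
        base36.append('0123456789abcdefghijklmnopqrstuvwxyz'[i])
    return ''.join(reversed(base36))

def insert_url(db, url):
    short_id = db.get('reverse-url:' + url)
    if short_id is not None:
        return short_id  # URL was already shortened: reuse the id
    short_id = base36_encode(db.incr('last-url-id'))
    db.set('url-target:' + short_id, url)
    db.set('reverse-url:' + url, short_id)
    return short_id

db = FakeRedis()
first = insert_url(db, 'http://example.com')
again = insert_url(db, 'http://example.com')
```

Submitting the same URL twice yields the same short id because of the reverse-url: lookup.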
We don't yet have the templates for these views; don't worry, we will write them in a moment. Before that, though, let's finish the other views.
The redirect view is simple: it just looks the link target up in redis and redirects to it. Additionally we increment a counter so that we know how often a link was clicked:
def on_follow_short_link(self, request, short_id):
link_target = self.redis.get('url-target:' + short_id)
if link_target is None:
raise NotFound()
self.redis.incr('click-count:' + short_id)
return redirect(link_target)
If the target URL does not exist, we raise a NotFound exception, which bubbles up to the dispatch_request function and is converted into a 404 response.
The link details view is very similar; we just render another template. In addition to the target URL, we also ask redis for the number of times the link was clicked, defaulting to zero if that key does not exist yet:
def on_short_link_details(self, request, short_id):
link_target = self.redis.get('url-target:' + short_id)
if link_target is None:
raise NotFound()
click_count = int(self.redis.get('click-count:' + short_id) or 0)
return self.render_template('short_link_details.html',
link_target=link_target,
short_id=short_id,
click_count=click_count
)
Keep in mind that redis always works with strings, so you have to convert the click count to an int by hand.
Here are all the templates; just drop them into the templates folder. Jinja2 supports template inheritance, so the first thing we will do is create a layout template with blocks that act as placeholders. We also set up Jinja2 to automatically escape strings with HTML rules, which saves us a lot of typing and also guards against XSS attacks and broken pages.
layout.html:
<!doctype html>
<title>{% block title %}{% endblock %} | shortly</title>
<link rel=stylesheet href=/static/style.css type=text/css>
<div class=box>
<h1><a href=/>shortly</a></h1>
<p class=tagline>Shortly is a URL shortener written with Werkzeug
{% block body %}{% endblock %}
</div>
new_url.html:
{% extends "layout.html" %}
{% block title %}Create New Short URL{% endblock %}
{% block body %}
<h2>Submit URL</h2>
<form action="" method=post>
{% if error %}
<p class=error><strong>Error:</strong> {{ error }}
{% endif %}
<p>URL:
<input type=text name=url value="{{ url }}" class=urlinput>
<input type=submit value="Shorten">
</form>
{% endblock %}
short_link_details.html:
{% extends "layout.html" %}
{% block title %}Details about /{{ short_id }}{% endblock %}
{% block body %}
<h2><a href="/{{ short_id }}">/{{ short_id }}</a></h2>
<dl>
<dt>Full link
<dd class=link><div>{{ link_target }}</div>
<dt>Click count:
<dd>{{ click_count }}
</dl>
{% endblock %}
Some styling makes the pages look a little friendlier than plain black on white. Here is a simple stylesheet:
body { background: #E8EFF0; margin: 0; padding: 0; }
body, input { font-family: 'Helvetica Neue', Arial,
sans-serif; font-weight: 300; font-size: 18px; }
.box { width: 500px; margin: 60px auto; padding: 20px;
background: white; box-shadow: 0 1px 4px #BED1D4;
border-radius: 2px; }
a { color: #11557C; }
h1, h2 { margin: 0; color: #11557C; }
h1 a { text-decoration: none; }
h2 { font-weight: normal; font-size: 24px; }
.tagline { color: #888; font-style: italic; margin: 0 0 20px 0; }
.link div { overflow: auto; font-size: 0.8em; white-space: pre;
padding: 4px 10px; margin: 5px 0; background: #E5EAF1; }
dt { font-weight: normal; }
.error { background: #E8EFF0; padding: 3px 8px; color: #11557C;
font-size: 0.9em; border-radius: 2px; }
.urlinput { width: 300px; }
You can find the code for this tutorial in the example directory of the Werkzeug repository. The version there may contain a few improvements over what we built here, such as a custom 404 page.
Werkzeug is intended to be a useful toolkit rather than a framework. Thanks to its user-friendly APIs being cleanly separated from the lower-level ones, Werkzeug can easily be extended or combined with another system.
The functionality provided by the Request and Response objects (aka the "wrappers") is enough to implement small features directly.
This example implements a small Hello World application that greets the user with the name they entered:
from werkzeug.utils import escape
from werkzeug.wrappers import Request, Response
@Request.application
def hello_world(request):
result = ['<title>Greeter</title>']
if request.method == 'POST':
result.append('<h1>Hello %s!</h1>' % escape(request.form['name']))
result.append('''
<form action="" method="post">
<p>Name: <input type="text" name="name" size="20">
<input type="submit" value="Greet me">
</form>
''')
return Response(''.join(result), mimetype='text/html')
Alternatively, the same application can be implemented without request and response objects, by using the parsing functions werkzeug provides:
from werkzeug.formparser import parse_form_data
from werkzeug.utils import escape
def hello_world(environ, start_response):
result = ['<title>Greeter</title>']
if environ['REQUEST_METHOD'] == 'POST':
form = parse_form_data(environ)[1]
result.append('<h1>Hello %s!</h1>' % escape(form['name']))
result.append('''
<form action="" method="post">
<p>Name: <input type="text" name="name" size="20">
<input type="submit" value="Greet me">
</form>
''')
start_response('200 OK', [('Content-Type', 'text/html; charset=utf-8')])
return [''.join(result)]
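On Python 3 the same low-level idea can be expressed with the standard library alone. This is a sketch rather than Werkzeug code: the form body is read straight from wsgi.input and parsed with urllib.parse, and html.escape stands in for werkzeug's escape:

```python
from html import escape
from io import BytesIO
from urllib.parse import parse_qs

def hello_world(environ, start_response):
    result = ['<title>Greeter</title>']
    if environ['REQUEST_METHOD'] == 'POST':
        # Read exactly CONTENT_LENGTH bytes of the urlencoded body.
        size = int(environ.get('CONTENT_LENGTH') or 0)
        form = parse_qs(environ['wsgi.input'].read(size).decode('utf-8'))
        result.append('<h1>Hello %s!</h1>' % escape(form['name'][0]))
    start_response('200 OK', [('Content-Type', 'text/html; charset=utf-8')])
    return [''.join(result).encode('utf-8')]

# Exercise it by hand with a fabricated POST environ.
environ = {
    'REQUEST_METHOD': 'POST',
    'CONTENT_LENGTH': '8',
    'wsgi.input': BytesIO(b'name=Bob'),
}
body = b''.join(hello_world(environ, lambda status, headers: None))
```

Note how much bookkeeping (content length, decoding, multi-value fields) werkzeug.formparser takes off your hands.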
Usually you will prefer the higher-level APIs (the request and response objects), but there are situations where the lower-level functions are the better choice.
For example, you might want to parse HTTP headers without breaking the code of Django or another framework. Werkzeug's low-level APIs let you parse HTTP headers on their own.
Or, if you want to write your own web framework, write unit tests, or use WSGI middleware to modernize an old CGI/mod_python application into a WSGI application while keeping the overhead low, you will likewise want the lower-level APIs.
This part of the documentation shows you how to use the most important parts of Werkzeug. It is intended for developers with a basic understanding of PEP 333 (WSGI) and RFC 2616 (HTTP).
Warning
Make sure to import all objects from the places the documentation suggests. It is theoretically possible in some cases to import objects from different locations, but that is not supported.
For example, MultiDict is exposed by the werkzeug module, but it is not implemented there internally.
The WSGI environment contains all the information the user's request transmits to the application. It is passed to the WSGI application, but you can also create a WSGI environ dict yourself using the create_environ() helper:
>>> from werkzeug.test import create_environ
>>> environ = create_environ('/foo', 'http://localhost:8080/')
Now we have an environment to play with:
>>> environ['PATH_INFO']
'/foo'
>>> environ['SCRIPT_NAME']
''
>>> environ['SERVER_NAME']
'localhost'
Usually nobody wants to work with the environ directly, because it is limited to bytestrings and does not provide any way to access the form data other than parsing it by hand.
Accessing the request data is much more pleasant through the Request object, which wraps the environ and provides read-only access to the data:
>>> from werkzeug.wrappers import Request
>>> request = Request(environ)
Now you can access the important variables, and Werkzeug will parse and decode them for you. The default charset is utf-8, but you can change that by subclassing Request.
>>> request.path
u'/foo'
>>> request.script_root
u''
>>> request.host
'localhost:8080'
>>> request.url
'http://localhost:8080/foo'
We can also find out which HTTP method was used for the request:
>>> request.method
'GET'
This way we can also access URL arguments (the query string) and data that was transmitted in the body of a POST or PUT request.
For testing purposes we can create a request object from supplied data using the from_values() method:
>>> from cStringIO import StringIO
>>> data = "name=this+is+encoded+form+data&another_key=another+one"
>>> request = Request.from_values(query_string='foo=bar&blah=blafasel',
... content_length=len(data), input_stream=StringIO(data),
... content_type='application/x-www-form-urlencoded',
... method='POST')
...
>>> request.method
'POST'
Now we can easily access the URL parameters:
>>> request.args.keys()
['blah', 'foo']
>>> request.args['blah']
u'blafasel'
Accessing the submitted form data works the same way:
>>> request.form['name']
u'this is encoded form data'
Handling uploaded files is no more difficult, as the following example shows:
def store_file(request):
file = request.files.get('my_file')
if file:
file.save('/where/to/store/the/file.txt')
else:
handle_the_error()
files maps to FileStorage objects, which provide some common operations for working with uploaded files.
The request headers are available through the headers attribute:
>>> request.headers['Content-Length']
'54'
>>> request.headers['Content-Type']
'application/x-www-form-urlencoded'
The keys for header lookups are case-insensitive.
Werkzeug provides convenient access to many more details of HTTP headers and other request data.
Let's create a request object with all the data a typical web browser transmits, so that we can play with it in a more realistic setting:
>>> environ = create_environ()
>>> environ.update(
... HTTP_USER_AGENT='Mozilla/5.0 (Macintosh; U; Mac OS X 10.5; en-US; ) Firefox/3.1',
... HTTP_ACCEPT='text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
... HTTP_ACCEPT_LANGUAGE='de-at,en-us;q=0.8,en;q=0.5',
... HTTP_ACCEPT_ENCODING='gzip,deflate',
... HTTP_ACCEPT_CHARSET='ISO-8859-1,utf-8;q=0.7,*;q=0.7',
... HTTP_IF_MODIFIED_SINCE='Fri, 20 Feb 2009 10:10:25 GMT',
... HTTP_IF_NONE_MATCH='"e51c9-1e5d-46356dc86c640"',
... HTTP_CACHE_CONTROL='max-age=0'
... )
...
>>> request = Request(environ)
Let's start with the most useless header: the user agent:
>>> request.user_agent.browser
'firefox'
>>> request.user_agent.platform
'macos'
>>> request.user_agent.version
'3.1'
>>> request.user_agent.language
'en-US'
A more useful header is the Accept header, with which the browser tells the web application which MIME types it can handle and how well. Accept headers are sorted by quality, the most preferred entry first:
>>> request.accept_mimetypes.best
'text/html'
>>> 'application/xhtml+xml' in request.accept_mimetypes
True
>>> print request.accept_mimetypes["application/json"]
0.8
Accepted languages work the same way:
>>> request.accept_languages.best
'de-at'
>>> request.accept_languages.values()
['de-at', 'en-us', 'en']
And of course encodings and charsets:
>>> 'gzip' in request.accept_encodings
True
>>> request.accept_charsets.best
'ISO-8859-1'
>>> 'utf-8' in request.accept_charsets
True
Normalization is available too, so you can safely use alternative forms to perform containment checks:
>>> 'UTF8' in request.accept_charsets
True
>>> 'de_AT' in request.accept_languages
True
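The quality sorting that drives these Accept properties can be approximated with a few lines of stdlib code. The parser below is a rough sketch and ignores many details that werkzeug's real accept parsing handles:

```python
def parse_accept(value):
    # Split an Accept-style header into (name, quality) pairs,
    # highest quality first; q defaults to 1.0 when omitted.
    items = []
    for part in value.split(','):
        pieces = part.strip().split(';')
        name = pieces[0].strip()
        quality = 1.0
        for piece in pieces[1:]:
            key, _, val = piece.strip().partition('=')
            if key == 'q':
                quality = float(val)
        items.append((name, quality))
    return sorted(items, key=lambda item: item[1], reverse=True)

accept = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
parsed = parse_accept(accept)
```

With the browser Accept header shown earlier, text/html comes out on top, matching request.accept_mimetypes.best.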
E-tags and other conditional headers are parsed as well:
>>> request.if_modified_since
datetime.datetime(2009, 2, 20, 10, 10, 25)
>>> request.if_none_match
<ETags '"e51c9-1e5d-46356dc86c640"'>
>>> request.cache_control
<RequestCacheControl 'max-age=0'>
>>> request.cache_control.max_age
0
>>> 'e51c9-1e5d-46356dc86c640' in request.if_none_match
True
Response objects are the opposite of request objects: they are used to transmit data back to the client. In reality, response objects are little more than glorified WSGI applications.
So what you do is not return the response object from your WSGI application; instead, you call it as a WSGI application inside your WSGI application and return the value of that call.
So imagine your standard WSGI "Hello World" application:
def application(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Hello World!']
With a response object it would look like this:
from werkzeug.wrappers import Response
def application(environ, start_response):
response = Response('Hello World!')
return response(environ, start_response)
Also, unlike request objects, response objects are designed to be modified. So here is what you can do with them:
>>> from werkzeug.wrappers import Response
>>> response = Response("Hello World!")
>>> response.headers['content-type']
'text/plain; charset=utf-8'
>>> response.data
'Hello World!'
>>> response.headers['content-length'] = len(response.data)
You can modify the status of the response in the same way, either as the full status string or just as the status code:
>>> response.status
'200 OK'
>>> response.status = '404 Not Found'
>>> response.status_code
404
>>> response.status_code = 400
>>> response.status
'400 BAD REQUEST'
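The two-way behavior can be modeled with a pair of properties. This is a simplified, hypothetical sketch of the idea, not the actual Response implementation:

```python
class StatusHolder:
    # A tiny subset of reason phrases, enough for this sketch.
    REASONS = {200: 'OK', 400: 'BAD REQUEST', 404: 'NOT FOUND'}

    def __init__(self):
        self.status = '200 OK'

    @property
    def status_code(self):
        # Derive the integer code from the stored status string.
        return int(self.status.split(None, 1)[0])

    @status_code.setter
    def status_code(self, code):
        # Rebuild the status string from the code.
        self.status = '%d %s' % (code, self.REASONS.get(code, 'UNKNOWN'))

holder = StatusHolder()
holder.status_code = 400
```

Setting either attribute keeps the other consistent, which is the behavior the interpreter session above demonstrates.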
As you can see, the status attributes work in both directions: status and status_code always reflect each other.
Common headers are also exposed as attributes, and there are methods to set/retrieve them:
>>> response.content_length
12
>>> from datetime import datetime
>>> response.date = datetime(2009, 2, 20, 17, 42, 51)
>>> response.headers['Date']
'Fri, 20 Feb 2009 17:42:51 GMT'
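The Date value shown above is the RFC 1123 format that HTTP requires; for comparison, the standard library can produce the same string:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# format_datetime with usegmt=True emits the RFC 1123 form HTTP uses.
date_header = format_datetime(
    datetime(2009, 2, 20, 17, 42, 51, tzinfo=timezone.utc), usegmt=True)
```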
Because etags can be weak or strong, there are methods to set them:
>>> response.set_etag("12345-abcd")
>>> response.headers['etag']
'"12345-abcd"'
>>> response.get_etag()
('12345-abcd', False)
>>> response.set_etag("12345-abcd", weak=True)
>>> response.get_etag()
('12345-abcd', True)
Some headers are mutable structures. For instance, most of the Content- headers are sets of values:
>>> response.content_language.add('en-us')
>>> response.content_language.add('en')
>>> response.headers['Content-Language']
'en-us, en'
This also works the other way round: setting the header updates the parsed attribute:
>>> response.headers['Content-Language'] = 'de-AT, de'
>>> response.content_language
HeaderSet(['de-AT', 'de'])
Authentication headers can be set that way as well:
>>> response.www_authenticate.set_basic("My protected resource")
>>> response.headers['www-authenticate']
'Basic realm="My protected resource"'
Cookies can be set too:
>>> response.set_cookie('name', 'value')
>>> response.headers['Set-Cookie']
'name=value; Path=/'
>>> response.set_cookie('name2', 'value2')
If a header appears multiple times, you can use the getlist() method to get all of its values:
>>> response.headers.getlist('Set-Cookie')
['name=value; Path=/', 'name2=value2; Path=/']
Finally, once you have set all the conditional values, you can make the response conditional against a request. This means that if the request can assure that it already has the information, only the headers are sent back instead of the full response body, which saves traffic. For that you should at least set an etag (which is used for comparison) and the date header, and then call make_conditional with the request object.
The response is then modified accordingly (the status code changes, the response body is removed, entity headers are stripped, and so on).
This part of the documentation is about using Werkzeug and WSGI in a Python 3 environment.
Warning
Werkzeug's support for Python 3 is currently experimental, so feedback about any problems you encounter is welcome and will help us improve it.
The WSGI environment in Python 3 works slightly differently than it does in Python 2. Werkzeug hides most of the differences from you if you work with the higher-level APIs. The main difference is that in Python 2 the WSGI environ contains bytes, whereas the environ in Python 3 contains a range of differently encoded strings.
There are two different kinds of WSGI strings in the Python 3 environment:
Werkzeug provides functions that deal with these differences automatically, so you do not need to be aware of the internal implementation. The following functions and classes can be used to read information out of the WSGI environment:
It is not recommended to create or modify a WSGI environment yourself in Python 3 unless you make sure the values are properly decoded. All of Werkzeug's high-level APIs do the right encoding and decoding for you.
URLs in Werkzeug are unicode strings on Python 3. All the parsing functions generally also offer functionality that operates on bytes. In some cases the URL-handling functions allow the charset to be left unset, in which case a bytes object is returned unchanged. Internally, Werkzeug unifies URIs and IRIs as much as possible.
When uploading files on Python 3 and PyPy, make sure to close the Request object. This is needed to properly close the temporary files created by the multipart parser; you can use the close() method for that.
Besides that, request objects also work as context managers, which close them automatically.
The serving tools cover the development server and test support, as well as the management script utilities:
There are many ways to serve a WSGI application. While developing, you usually don't want a full-blown web server like Apache up and running, but a simple standalone one instead. Werkzeug comes with such a simple, lightweight server built in.
The easiest way to serve your application is to create a start-myproject.py file like the following and run it on the server:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from werkzeug.serving import run_simple
from myproject import make_app
app = make_app(...)
run_simple('localhost', 8080, app, use_reloader=True)
You can add an extra_files keyword argument with a list of additional files (for example configuration files) that you want the reloader to watch.
run_simple starts a WSGI application with an optional reloader. It wraps wsgiref to fix the default error reporting for multithreaded WSGI, and it can optionally enable multithreading and fork support.
This function also has a command-line interface:
python -m werkzeug.serving --help
New in version 0.5: static_files was added for simple serving of static files, along with passthrough_errors.
New in version 0.6: support for SSL was added.
New in version 0.8: added support for automatically loading an SSL context from a certificate file and private key.
New in version 0.9: added the command-line interface.
make_ssl_devcert creates an SSL key for development. Use this instead of the 'adhoc' key, which generates a new certificate every time the server starts. It accepts a path where the key and certificate are stored, plus the host or a CN. If a host is given, it will use the CN *.host/CN=host.
See run_simple() for more information.
New in version 0.9.
Parameters:
base_path – the path to the certificate and key. The extension .crt is added for the certificate, .key for the key.
host – the name of the host. This can be used as an alternative to the cn.
cn – the CN to use.
Information
The development server is not intended for use in production. It exists to make development more convenient, and it performs poorly under high load. For deploying an application in production, have a look at the Application Deployment pages.
Some applications use multiple subdomains, and you need to be able to simulate that locally. Fortunately, the hosts file can be used to assign multiple names to the local machine.
This allows you to use yourapplication.local and api.yourapplication.local (or anything else you like) instead of localhost to reach the local machine.
You can find the hosts file in the following locations:
Windows: %SystemRoot%\system32\drivers\etc\hosts
Linux / OS X: /etc/hosts
Open the hosts file with your favorite text editor and add the new names after localhost:
127.0.0.1 localhost yourapplication.local api.yourapplication.local
After saving, you should be able to reach the development server under those host names as well. You can use the URL routing system to dispatch between the hosts, or parse request.host yourself.
New in version 0.7.
Starting with Werkzeug 0.7, the development server allows shutting the server down after a request. This currently requires Python 2.6 or later and will only work with the development server. To initiate the shutdown, you have to call a function named 'werkzeug.server.shutdown' in the WSGI environment:
def shutdown_server(environ):
if not 'werkzeug.server.shutdown' in environ:
raise RuntimeError('Not running the development server')
environ['werkzeug.server.shutdown']()
On operating systems that support and are configured for ipv6, such as modern Linux systems, OS X 10.4 or higher, and Windows Vista, some browsers can be painfully slow when accessing your local server. The reason is that the local machine is sometimes set up to be reachable over both ipv4 and ipv6 sockets, and some browsers try ipv6 first.
Currently the integrated server cannot handle both protocols at once; for better portability, it defaults to ipv4.
Note that there are two ways to fix this problem. If you don't need ipv6 support, you can remove the following line from the hosts file:
::1 localhost
Alternatively you can disable ipv6 support in your browser; in Firefox, for example, this can be done by going to about:config and toggling network.dns.disableIPv6. This is, however, not recommended as of Werkzeug 0.6.1.
Starting with Werkzeug 0.6.1, the server no longer switches between protocols based on the operating system's configuration. This means that if your browser has ipv6 disabled while your operating system prefers ipv6, you will be unable to connect to the server. In that situation you can either remove ::1 for localhost from your hosts file, or explicitly bind the hostname to an ipv4 address (127.0.0.1).
New in version 0.6.
The built-in server supports SSL for testing purposes. If an SSL context is provided, it will be used; that means a server can run in either HTTP or HTTPS mode, but not both at once. This feature requires the Python OpenSSL library.
The easiest way to do SSL-based development with Werkzeug is to have it generate an SSL certificate and private key and store them somewhere. For the certificate you need to provide the name of your server or a CN.
Generate an SSL key and store it somewhere:
>>> from werkzeug.serving import make_ssl_devcert
>>> make_ssl_devcert('/path/to/the/key', host='localhost')
('/path/to/the/key.crt', '/path/to/the/key.key')
Now this tuple can be passed as the ssl_context to the run_simple() method:
run_simple('localhost', 4000, application,
           ssl_context=('/path/to/the/key.crt', '/path/to/the/key.key'))
You will then have to acknowledge the certificate in your browser once you access the web application.
Instead of the ssl_context tuple, you can also create the context in code; this method gives you better control:
from OpenSSL import SSL
ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.use_privatekey_file('ssl.key')
ctx.use_certificate_file('ssl.cert')
run_simple('localhost', 4000, application, ssl_context=ctx)
Instead of using make_ssl_devcert(), you can also create the certificate beforehand with the openssl tool. This requires the openssl command-line tool to be installed on your system:
$ openssl genrsa 1024 > ssl.key
$ openssl req -new -x509 -nodes -sha1 -days 365 -key ssl.key > ssl.cert
The easiest way to enable SSL is to start the server in adhoc mode. In that case Werkzeug will create an SSL certificate for you:
run_simple('localhost', 4000, application,
ssl_context='adhoc')
The downside of this approach is that you will have to acknowledge the certificate each time the server reloads. Adhoc certificates are discouraged because modern browsers support them poorly for security reasons.
Quite often you want to unittest your application, or just check its output from an interactive Python session. In theory that is pretty simple: you can fake an environ, iterate over the application with a fake start_response, and look at the results. But there is a better way.
Werkzeug provides a Client object that takes a WSGI application (and optionally a response wrapper), which you can use to fire virtual requests at the application.
A response wrapper is called with three arguments: the application iterator, the status, and a list of headers. The default response wrapper returns a tuple. Because response objects have the same signature, you can also use them as response wrappers, which is ideal for testing:
>>> from werkzeug.test import Client
>>> from werkzeug.testapp import test_app
>>> from werkzeug.wrappers import BaseResponse
>>> c = Client(test_app, BaseResponse)
>>> resp = c.get('/')
>>> resp.status_code
200
>>> resp.headers
Headers([('Content-Type', 'text/html; charset=utf-8'), ('Content-Length', '8339')])
>>> resp.data.splitlines()[0]
'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"'
Or without a response wrapper defined:
>>> c = Client(test_app)
>>> app_iter, status, headers = c.get('/')
>>> status
'200 OK'
>>> headers
[('Content-Type', 'text/html; charset=utf-8'), ('Content-Length', '8339')]
>>> ''.join(app_iter).splitlines()[0]
'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"'
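What the Client does in this wrapper-less mode can be approximated in a few lines. The toy client below is a hypothetical sketch that fabricates a minimal environ and hands back the same (app_iter, status, headers) triple:

```python
def minimal_client_get(app, path='/'):
    # Collects what the application passes to start_response.
    collected = {}

    def start_response(status, headers):
        collected['status'] = status
        collected['headers'] = headers

    # A real Client builds a much more complete environ than this.
    environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': path}
    app_iter = app(environ, start_response)
    return app_iter, collected['status'], collected['headers']

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

app_iter, status, headers = minimal_client_get(app)
```

The real Client additionally handles cookies, redirects, and proper environ construction via EnvironBuilder.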
New in version 0.5.
The easiest way to test applications interactively is to use the EnvironBuilder class. It can create both standard WSGI environments and request objects.
The following example creates a WSGI environment with one uploaded file and a form field:
>>> from werkzeug.test import EnvironBuilder
>>> from StringIO import StringIO
>>> builder = EnvironBuilder(method='POST', data={'foo': 'this is some text',
... 'file': (StringIO('my file contents'), 'test.txt')})
>>> env = builder.get_environ()
The resulting environment is a regular WSGI environment that can be used for further processing:
>>> from werkzeug.wrappers import Request
>>> req = Request(env)
>>> req.form['foo']
u'this is some text'
>>> req.files['file']
<FileStorage: u'test.txt' ('text/plain')>
>>> req.files['file'].read()
'my file contents'
When you pass a dict to the constructor as data, EnvironBuilder automatically figures out the content type for you. If you pass a string or an input stream, you have to take care of that yourself.
By default it will try application/x-www-form-urlencoded and only use multipart/form-data if files are being uploaded:
>>> builder = EnvironBuilder(method='POST', data={'foo': 'bar'})
>>> builder.content_type
'application/x-www-form-urlencoded'
>>> builder.files['foo'] = StringIO('contents')
>>> builder.content_type
'multipart/form-data'
If you pass a string (or an input stream) as data, you have to specify the content type yourself:
>>> builder = EnvironBuilder(method='POST', data='{"json": "this is"}')
>>> builder.content_type
>>> builder.content_type = 'application/json'
This class conveniently creates WSGI environments for testing purposes. It can quickly create WSGI environments or request objects from arbitrary data.
The signature of this class is also used in some other places in Werkzeug (create_environ(), BaseResponse.from_values(), Client.open()); because of that, most of the functionality is available through the constructor alone.
Files and regular form data can be manipulated independently through the form and files attributes, but they are passed to the constructor with the same argument: data.
data can be any of these values:
New in version 0.6: path and base_url can now be unicode strings, which are encoded using the iri_to_uri() function.
Parameters:
path – the path of the request (aka PATH_INFO).
charset – the charset used to encode unicode data.
headers – an optional Headers object with the request headers.
errors_stream – an error stream to use for wsgi.errors.
multithread – the value of wsgi.multithread.
multiprocess – the value of wsgi.multiprocess.
environ_base – a dict of values that are used as the base for the newly created environ.
environ_overrides – a dict of values that override the generated environ.
input_stream – an optional input stream. This is mutually exclusive with form and files; also, do not provide an input stream if the request method is not POST / PUT or something comparable.
query_string – the query string, as a MultiDict.
base_url – the base URL, used to extract the WSGI URL scheme, the host (server name + server port) and the script root (SCRIPT_NAME).
close() – closes all files. If you put real file objects into the files dict, you can call this method to close them all automatically.
get_environ() – returns the built environ.
get_request(cls=None) – returns a request with the data. If the request class is not specified, request_class is used.
Parameters: cls – the request class to use for wrapping.
input_stream – an optional input stream. If you set this, it will clear form and files.
request_class – the default request class used by get_request(). Alias of BaseRequest.
server_name – the server name (read-only; use host to set it).
server_port – the server port as an integer (read-only; use host to set it).
server_protocol – the protocol the server uses. Defaults to HTTP/1.1.
wsgi_version – the WSGI version to use. Defaults to (1, 0).
This class allows you to send requests to a wrapped application.
The response wrapper can be a class or a factory function that takes three arguments: app_iter, status and headers. The default response wrapper just returns a tuple.
For example:
class ClientResponse(BaseResponse):
...
client = Client(MyApplication(), response_wrapper=ClientResponse)
The use_cookies parameter indicates whether cookies should be stored and sent with subsequent requests; it is enabled by default, but you can disable it.
If you want the client to follow requests to subdomains of your application, set allow_subdomain_redirects to True; otherwise no external redirects are allowed.
New in version 0.5: use_cookies was added. Older versions did not provide the built-in cookie support.
open() takes the same arguments as the EnvironBuilder class, with some additions: you can provide an EnvironBuilder instance or a WSGI environment instead of the EnvironBuilder arguments. Furthermore, two optional keyword arguments (as_tuple and buffered) change the type of the return value or the way the application is executed.
Changed in version 0.5: if a dict with files is provided as the data parameter, the content type has to be called content_type instead of mimetype. This change was made for consistency with werkzeug.FileWrapper.
The follow_redirects parameter was added to open().
Additional parameters:
get() – like open, but the method is forced to GET.
post() – like open, but the method is forced to POST.
put() – like open, but the method is forced to PUT.
delete() – like open, but the method is forced to DELETE.
head() – like open, but the method is forced to HEAD.
create_environ() creates a new WSGI environ from the values passed. The first parameter should be the path of the request, which defaults to '/'. The second one can either be an absolute path (in which case the host is localhost:80) or a full path to the request, including scheme, host, port and script root.
It accepts the same arguments as the EnvironBuilder constructor.
Changed in version 0.5: this function is now a thin wrapper over EnvironBuilder, which was added in 0.5. The headers, environ_base, environ_overrides and charset parameters were added.
run_wsgi_app() returns a tuple in the form (app_iter, status, headers) of the application's output. This works best if you pass it an application that returns an iterator.
Sometimes applications use the write() callable returned by the start_response function. This helper tries to resolve such edge cases automatically. If you don't get the expected output, you should set buffered to True, which enforces buffering.
If you pass an invalid WSGI application to this function, the behavior is undefined; never pass non-conforming WSGI applications to it.
Returns: a tuple in the form (app_iter, status, headers).
Depending on the WSGI gateway/server, exceptions are handled differently. But most of the time, exceptions go to stderr or the error log.
Since this is not the best debugging environment, Werkzeug provides a WSGI middleware that renders nice debugging tracebacks, optionally with an AJAX based debugger (which allows executing code in the context of the traceback's frames).
The interactive debugger however does not work in forking environments which makes it nearly impossible to use on production servers. Also the debugger allows the execution of arbitrary code which makes it a major security risk and must never be used on production machines because of that.
You can enable the debugger by wrapping the application in a DebuggedApplication middleware. Additionally there are parameters to the run_simple() function to enable it because this is a common task during development.
Enables debugging support for a given application:
from werkzeug.debug import DebuggedApplication
from myapp import app
app = DebuggedApplication(app, evalex=True)
The evalex keyword argument allows evaluating expressions in a traceback’s frame context.
New in version 0.9: The lodgeit_url parameter was deprecated.
Once enabled and an error happens during a request you will see a detailed traceback instead of a general “internal server error”. If you have the evalex feature enabled you can also get a traceback for every frame in the traceback by clicking on the console icon.
Once clicked a console opens where you can execute Python code in:
Inside the interactive consoles you can execute any kind of Python code. Unlike regular Python consoles the output of the object reprs is colored and stripped to a reasonable size by default. If the output is longer than what the console decides to display a small plus sign is added to the repr and a click will expand the repr.
To display all variables that are defined in the current frame you can use the dump() function. You can call it without arguments to get a detailed list of all variables and their values, or with an object as argument to get a detailed list of all the attributes it has.
If you click on the Traceback title the traceback switches over to a text based one. The text based one can be pasted to paste.pocoo.org with one click.
The request and response objects wrap the WSGI environment or the return value from a WSGI application so that it is another WSGI application (wraps a whole application).
Your WSGI application is always passed two arguments. The WSGI “environment” and the WSGI start_response function that is used to start the response phase. The Request class wraps the environ for easier access to request variables (form data, request headers etc.).
The Response on the other hand is a standard WSGI application that you can create. The simple hello world in Werkzeug looks like this:
from werkzeug.wrappers import Response
application = Response('Hello World!')
To make it more useful you can replace it with a function and do some processing:
from werkzeug.wrappers import Request, Response
def application(environ, start_response):
request = Request(environ)
response = Response("Hello %s!" % request.args.get('name', 'World!'))
return response(environ, start_response)
Because this is a very common task the Request object provides a helper for that. The above code can be rewritten like this:
from werkzeug.wrappers import Request, Response
@Request.application
def application(request):
return Response("Hello %s!" % request.args.get('name', 'World!'))
The application is still a valid WSGI application that accepts the environment and start_response callable.
The implementation of the Werkzeug request and response objects are trying to guard you from common pitfalls by disallowing certain things as much as possible. This serves two purposes: high performance and avoiding of pitfalls.
For the request object the following rules apply:
For the response object the following rules apply:
These objects implement a common set of operations. They are missing fancy addon functionality like user agent parsing or etag handling. These features are available by mixing in various mixin classes or using Request and Response.
Very basic request object. This does not implement advanced stuff like entity tag parsing or cache controls. The request object is created with the WSGI environment as first argument and will add itself to the WSGI environment as 'werkzeug.request' unless it’s created with populate_request set to False.
There are a couple of mixins available that add additional functionality to the request object, there is also a class called Request which subclasses BaseRequest and all the important mixins.
It’s a good idea to create a custom subclass of BaseRequest and add missing functionality either via mixins or direct implementation. Here is an example of such a subclass:
from werkzeug.wrappers import BaseRequest, ETagRequestMixin

class Request(BaseRequest, ETagRequestMixin):
    pass
Request objects are read only. As of 0.5 modifications are not allowed in any place. Unlike the lower level parsing functions the request object will use immutable objects everywhere possible.
By default the request object will assume all the text data is utf-8 encoded. Please refer to the unicode chapter for more details about customizing the behavior.
By default the request object will be added to the WSGI environment as werkzeug.request to support the debugging system. If you don’t want that, set populate_request to False.
If shallow is True the environment is initialized as a shallow object around the environ. Every operation that would modify the environ in any way (such as consuming form data) raises an exception unless the shallow attribute is explicitly set to False. This is useful for middlewares where you don’t want to consume the form data by accident. A shallow request is not populated to the WSGI environment.
Changed in version 0.5: read-only mode was enforced by using immutable classes for all data.
The WSGI environment that the request object uses for data retrieval.
Called to get a stream for the file upload.
This must provide a file-like class with read(), readline() and seek() methods that is both writable and readable.
The default implementation returns a temporary file if the total content length is higher than 500KB. Because many browsers do not provide a content length for the files only the total content length matters.
If a forwarded header exists this is a list of all IP addresses from the client IP to the last proxy server.
Decorate a function as responder that accepts the request as first argument. This works like the responder() decorator but the function is passed the request object as first argument and the request object will be closed automatically:
@Request.application
def my_wsgi_app(request):
    return Response('Hello World!')
Parameters: f – the WSGI callable to decorate
Returns: a new WSGI callable
The parsed URL parameters. By default an ImmutableMultiDict is returned from this function. This can be changed by setting parameter_storage_class to a different type. This might be necessary if the order of the form data is important.
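For example, multi-value access on args can be sketched with the EnvironBuilder test helper (the query parameters here are invented for illustration):

```python
from werkzeug.test import EnvironBuilder
from werkzeug.wrappers import Request

# build a WSGI environ for GET /?name=Werkzeug&tag=a&tag=b
environ = EnvironBuilder('/?name=Werkzeug&tag=a&tag=b').get_environ()
request = Request(environ)

request.args.get('name')     # 'Werkzeug'
request.args.getlist('tag')  # ['a', 'b']
```

Because args is an ImmutableMultiDict, both values for tag are preserved.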
the charset for the request, defaults to utf-8
Closes associated resources of this request object. This closes all file handles explicitly. You can also use the request object in a with statement, which will automatically close it.
New in version 0.9.
Read only access to the retrieved cookie values as dictionary.
the type to be used for dict values from the incoming WSGI environment. By default an ImmutableTypeConversionDict is used (for example for cookies).
New in version 0.6.
Alias of ImmutableTypeConversionDict
Indicates whether the data descriptor should be allowed to read and buffer up the input stream. By default it’s enabled.
New in version 0.9.
the error handling procedure for errors, defaults to ‘replace’
MultiDict object containing all uploaded files. Each key in files is the name from the <input type="file" name="">. Each value in files is a Werkzeug FileStorage object.
Note that files will only contain data if the request method was POST, PUT or PATCH and the <form> that posted to the request had enctype="multipart/form-data". It will be empty otherwise.
See the MultiDict / FileStorage documentation for more details about the used data structure.
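A small sketch of the files attribute, simulating an upload with the EnvironBuilder test helper (the field name, file name, and content are invented for illustration):

```python
from io import BytesIO
from werkzeug.test import EnvironBuilder
from werkzeug.wrappers import Request

# simulate a multipart/form-data POST with one uploaded file
builder = EnvironBuilder(
    method='POST',
    data={'avatar': (BytesIO(b'fake image bytes'), 'me.png')})
request = Request(builder.get_environ())

fs = request.files['avatar']   # a FileStorage object
fs.filename                    # 'me.png'
data = fs.read()               # b'fake image bytes'
```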
The form parameters. By default an ImmutableMultiDict is returned from this function. This can be changed by setting parameter_storage_class to a different type. This might be necessary if the order of the form data is important.
The form data parser that should be used. Can be replaced to customize the form data parsing.
Alias of FormDataParser
Create a new request object based on the values provided. If environ is given, missing values are filled from there. This method is useful for small scripts when you need to simulate a request from a URL. Do not use this method for unit testing, there is a full featured client object (Client) that allows to create multipart requests, support for cookies etc.
This accepts the same options as the EnvironBuilder.
Changed in version 0.5: This method now accepts the same arguments as EnvironBuilder. Because of this the environ parameter is now called environ_overrides.
Returns: a request object
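For example (the URL and header values here are illustrative only):

```python
from werkzeug.wrappers import Request

# simulate a GET request to /search?q=wsgi with a custom header
request = Request.from_values('/search?q=wsgi',
                              headers={'User-Agent': 'demo/1.0'})

request.path                   # '/search'
request.args['q']              # 'wsgi'
request.headers['User-Agent']  # 'demo/1.0'
```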
Requested path as unicode, including the query string.
This reads the buffered incoming data from the client into one bytestring. By default this is cached but that behavior can be changed by setting cache to False.
Usually it’s a bad idea to call this method without checking the content length first as a client could send dozens of megabytes or more to cause memory problems on the server.
Note that if the form data was already parsed this method will not return anything, as form data parsing does not cache the data the way this method does. To implicitly invoke the form data parsing function set parse_form_data to True. When this is done the return value of this method will be an empty string if the form parser handles the data. This generally is not necessary, because if the whole data is cached (which is the default) the form parser will use the cached data to parse the form data. In any case, check the content length first before calling this method to avoid exhausting server memory.
If as_text is set to True the return value will be a decoded unicode string.
New in version 0.9.
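A small sketch of get_data() (the payload and the one-megabyte limit are arbitrary choices for illustration):

```python
from werkzeug.wrappers import Request

# simulate a POST with a raw text body
request = Request.from_values('/submit', method='POST',
                              data=b'raw payload',
                              content_type='text/plain')

# checking the content length first guards against huge uploads
if (request.content_length or 0) < 1024 * 1024:
    body = request.get_data()              # b'raw payload' (cached)
    text = request.get_data(as_text=True)  # u'raw payload'
```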
The headers from the WSGI environ as immutable EnvironHeaders.
Just the host including the port if available.
Just the host with scheme.
boolean that is True if the application is served by a WSGI server that spawns multiple processes.
boolean that is True if the application is served by a multithreaded WSGI server.
boolean that is True if the application will be executed only once in a process lifetime. This is the case for CGI for example, but it’s not guaranteed that the execution only happens one time.
True if the request is secure.
True if the request was triggered via a JavaScript XMLHttpRequest. This only works with libraries that support the X-Requested-With header and set it to “XMLHttpRequest”. Libraries that do that are prototype, jQuery and Mochikit and probably some more.
the type to be used for list values from the incoming WSGI environment. By default an ImmutableList is used (for example for access_list).
New in version 0.6.
Alias of ImmutableList
Creates the form data parser. Instantiates the form_data_parser_class with some parameters.
New in version 0.8.
the maximum content length. This is forwarded to the form data parsing function (parse_form_data()). When set and the form or files attribute is accessed and the parsing fails because more than the specified value is transmitted a RequestEntityTooLarge exception is raised.
Have a look at Dealing with Request Data for more details.
New in version 0.5.
the maximum form field size. This is forwarded to the form data parsing function (parse_form_data()). When set and the form or files attribute is accessed and the data in memory for post data is longer than the specified value a RequestEntityTooLarge exception is raised.
Have a look at Dealing with Request Data for more details.
New in version 0.5.
The transmission method. (For example 'GET' or 'POST').
the class to use for args and form. The default is an ImmutableMultiDict which supports multiple values per key. Alternatively it makes sense to use an ImmutableOrderedMultiDict which preserves order, or an ImmutableDict which is the fastest but only remembers the last key. It is also possible to use mutable structures, but this is not recommended.
New in version 0.6.
Alias of ImmutableMultiDict
Requested path as unicode. This works a bit like the regular path info in the WSGI environment but will always include a leading slash, even if the URL root is accessed.
The URL parameters as raw bytestring.
The remote address of the client.
If the server supports user authentication, and the script is protected, this attribute contains the username the user has authenticated as.
URL scheme (http or https).
New in version 0.7.
The root path of the script without the trailing slash.
The stream to read incoming data from. Unlike input_stream this stream is properly guarded that you can’t accidentally read past the length of the input. Werkzeug will internally always refer to this stream to read data which makes it possible to wrap this object with a stream that does filtering.
Changed in version 0.9: This stream is now always available but might be consumed by the form parser later on. Previously the stream was only set if no parsing happened.
Optionally a list of hosts that is trusted by this request. By default all hosts are trusted, which means that whatever host the client sends will be accepted. This is the recommended setup as a webserver should manually be set up to not route invalid hosts to the application.
New in version 0.9.
The reconstructed current URL
The full URL root (with hostname), this is the application root.
Returns True if the request method carries content. As of Werkzeug 0.9 this will be the case if a content type is transmitted.
New in version 0.8.
Base response class. The most important fact about a response object is that it’s a regular WSGI application. It’s initialized with a couple of response parameters (headers, body, status code etc.) and will start a valid WSGI response when called with the environ and start response callable.
Because it’s a WSGI application itself processing usually ends before the actual response is sent to the server. This helps debugging systems because they can catch all the exceptions before responses are started.
Here is a small example WSGI application that takes advantage of the response objects:
from werkzeug.wrappers import BaseResponse as Response

def index():
    return Response('Index page')

def application(environ, start_response):
    path = environ.get('PATH_INFO') or '/'
    if path == '/':
        response = index()
    else:
        response = Response('Not Found', status=404)
    return response(environ, start_response)
Like BaseRequest, this object is missing a lot of functionality that is implemented in mixins. This gives you better control over the actual API of your response objects, so you can create subclasses and add custom functionality. A full featured response object is available as Response which implements a couple of useful mixins.
To enforce a new type on already existing responses you can use the force_type() method. This is useful if you’re working with different subclasses of response objects and you want to post process them with a known interface.
By default the response object will assume all the text data is utf-8 encoded. Please refer to the unicode chapter for more details about customizing the behavior.
Response can be any kind of iterable or string. If it’s a string it’s considered an iterable with one item which is the string passed. Headers can be a list of tuples or a Headers object.
Special note for mimetype and content_type: For most mime types mimetype and content_type work the same, the difference affects only ‘text’ mimetypes. If the mimetype passed with mimetype is a mimetype starting with text/, the charset parameter of the response object is appended to it. In contrast the content_type parameter is always added as header unmodified.
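The difference can be sketched like this (shown with the full Response class, which subclasses BaseResponse; the charset value is an arbitrary example):

```python
from werkzeug.wrappers import Response

# mimetype: the response charset is appended for text/* types
r1 = Response('hi', mimetype='text/plain')
r1.headers['Content-Type']  # 'text/plain; charset=utf-8'

# content_type: the given value is used as the header unmodified
r2 = Response('hi', content_type='text/plain; charset=iso-8859-1')
r2.headers['Content-Type']  # 'text/plain; charset=iso-8859-1'
```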
Changed in version 0.5: the direct_passthrough parameter was added.
The application iterator. If constructed from a string this will be a list, otherwise the object provided as application iterator. (The first argument passed to BaseResponse)
A Headers object representing the response headers.
The response status as integer.
If direct_passthrough=True was passed to the response object or if this attribute was set to True before using the response object as WSGI application, the wrapped iterator is returned unchanged. This makes it possible to pass a special wsgi.file_wrapper to the response object. See wrap_file() for more details.
Process this response as WSGI application.
Returns: an application iterator
This method can be called by methods that need a sequence. If mutable is true, it will also ensure that the response sequence is a standard Python list.
New in version 0.6.
Should this response object correct the location header to be RFC conformant? This is true by default.
New in version 0.8.
Should this response object automatically set the content-length header if possible? This is true by default.
New in version 0.8.
Returns the content length if available or None otherwise.
Adds a function to the internal list of functions that should be called as part of closing down the response. Since 0.7 this function also returns the function that was passed so that this can be used as a decorator.
New in version 0.6.
the charset of the response.
Close the wrapped response if possible. You can also use the object in a with statement which will automatically close it.
New in version 0.9: Can now be used in a with statement.
A descriptor that calls get_data() and set_data(). This should not be used and will eventually get deprecated.
the default mimetype if none is provided.
the default status if none is provided.
Delete a cookie. Fails silently if key doesn’t exist.
Enforce that the WSGI response is a response object of the current type. Werkzeug will use the BaseResponse internally in many situations like the exceptions. If you call get_response() on an exception you will get back a regular BaseResponse object, even if you are using a custom subclass.
This method can enforce a given response type, and it will also convert arbitrary WSGI callables into response objects if an environ is provided:
# convert a Werkzeug response object into an instance of the
# MyResponseClass subclass.
response = MyResponseClass.force_type(response)
# convert any WSGI application into a response object
response = MyResponseClass.force_type(response, environ)
This is especially useful if you want to post-process responses in the main dispatcher and use functionality provided by your subclass.
Keep in mind that this will modify response objects in place if possible!
Returns: a response object.
Call this method if you want to make your response object ready for being pickled. This buffers the generator if there is one. It will also set the Content-Length header to the length of the body.
Changed in version 0.6: The Content-Length header is now set.
Create a new response object from an application output. This works best if you pass it an application that returns a generator all the time. Sometimes applications may use the write() callable returned by the start_response function. This tries to resolve such edge cases automatically. But if you don’t get the expected output you should set buffered to True which enforces buffering.
Returns: a response object.
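A sketch of from_app() (the tiny WSGI application here is invented for illustration; Response subclasses BaseResponse):

```python
from werkzeug.test import create_environ
from werkzeug.wrappers import Response

def plain_app(environ, start_response):
    # a minimal WSGI app that returns a static body
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a plain WSGI app']

response = Response.from_app(plain_app, create_environ('/'))
response.status_code  # 200
response.get_data()   # b'Hello from a plain WSGI app'
```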
Returns the application iterator for the given environ. Depending on the request method and the current status code the return value might be an empty response rather than the one from the response.
If the request method is HEAD or the status code is in a range where the HTTP specification requires an empty response, an empty iterable is returned.
New in version 0.6.
Parameters: environ – the WSGI environment of the request.
Returns: a response iterable.
The string representation of the request body. Whenever you call this property the request iterable is encoded and flattened. This can lead to unwanted behavior if you stream big data.
This behavior can be disabled by setting implicit_sequence_conversion to False.
If as_text is set to True the return value will be a decoded unicode string.
New in version 0.9.
This is automatically called right before the response is started and returns headers modified for the given environment. It returns a copy of the headers from the response with some modifications applied if necessary.
For example the location header (if present) is joined with the root URL of the environment. Also the content length is automatically set to zero here for certain status codes.
Changed in version 0.6: Previously this function was called fix_headers and modified the response object in place. Also since 0.6, IRIs in location and content-location headers are handled properly.
Also starting with 0.6, Werkzeug will attempt to set the content length if it is able to figure it out on its own. This is the case if all the strings in the response iterable are already encoded and the iterable is buffered.
Parameters: environ – the WSGI environment of the request.
Returns: a new Headers object.
Returns the final WSGI response as tuple. The first item in the tuple is the application iterator, the second the status and the third the list of headers. The response returned is created specially for the given environment. For example if the request method in the WSGI environment is 'HEAD' the response will be empty and only the headers and status code will be present.
New in version 0.6.
Parameters: environ – the WSGI environment of the request.
Returns: an (app_iter, status, headers) tuple.
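For illustration (using the full Response class, which subclasses BaseResponse):

```python
from werkzeug.test import create_environ
from werkzeug.wrappers import Response

response = Response('body text', mimetype='text/plain')

app_iter, status, headers = response.get_wsgi_response(
    create_environ('/'))
status            # '200 OK'

# a HEAD request gets the headers but an empty body
head_iter, _, _ = response.get_wsgi_response(
    create_environ('/', method='HEAD'))
list(head_iter)   # []
```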
if set to False accessing properties on the response object will not try to consume the response iterator and convert it into a list.
New in version 0.6.2: This attribute was previously called implicit_seqence_conversion (notice the typo). If you did use this feature, you have to adapt your code to the name change.
If the iterator is buffered, this property will be True. A response object will consider an iterator to be buffered if the response attribute is a list or tuple.
New in version 0.6.
If the response is streamed (the response is not an iterable with a length information) this property is True. In this case streamed means that there is no information about the number of iterations. This is usually True if a generator is passed to the response object.
This is useful for checking before applying some sort of post filtering that should not take place for streamed responses.
Iterate over the response encoded with the charset of the response. If the response object is invoked as a WSGI application the return value of this method is used as the application iterator unless direct_passthrough was activated.
Converts the response iterator into a list. By default this happens automatically if required. If implicit_sequence_conversion is disabled, this method is not automatically called and some properties might raise exceptions. This also encodes all the items.
New in version 0.6.
Sets a cookie. The parameters are the same as in the cookie Morsel object in the Python standard library but it accepts unicode data, too.
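For example (the cookie name, value, and parameters are arbitrary):

```python
from werkzeug.wrappers import Response

response = Response('ok')
response.set_cookie('session_id', '1234', max_age=3600,
                    httponly=True, path='/')

cookie = response.headers['Set-Cookie']
# e.g. 'session_id=1234; HttpOnly; Max-Age=3600; Path=/'
```

Note that the attribute order within the Set-Cookie header may vary.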
Sets a new string as response. The value set must be either a unicode or bytestring. If a unicode string is set it’s encoded automatically to the charset of the response (utf-8 by default).
New in version 0.9.
The HTTP Status code
The HTTP Status code as number
Werkzeug also provides helper mixins for various HTTP related functionality such as etags, cache control, user agents etc. When subclassing you can mix those classes in to extend the functionality of the BaseRequest or BaseResponse object. Here is a small example for a request object that parses accept headers:
from werkzeug.wrappers import AcceptMixin, BaseRequest
class Request(BaseRequest, AcceptMixin):
    pass
The Request and Response classes subclass the BaseRequest and BaseResponse classes and implement all the mixins Werkzeug provides:
Full featured request object implementing the following mixins:
Full featured response object implementing the following mixins:
A mixin for classes with an environ attribute to get all the HTTP accept headers as Accept objects (or subclasses thereof).
List of charsets this client supports as CharsetAccept object.
List of encodings this client accepts. Encodings in a HTTP term are compression encodings such as gzip. For charsets have a look at accept_charset.
List of languages this client accepts as LanguageAccept object.
List of mimetypes this client supports as MIMEAccept object.
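For example (the header values are invented; shown with the full Request class, which mixes AcceptMixin in):

```python
from werkzeug.wrappers import Request

request = Request.from_values(headers={
    'Accept': 'text/html,application/json;q=0.8',
    'Accept-Language': 'de;q=0.9,en;q=0.4'})

best = request.accept_mimetypes.best                      # 'text/html'
lang = request.accept_languages.best_match(['en', 'de'])  # 'de'
request.accept_mimetypes['application/json']              # quality 0.8
```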
Adds an authorization property that represents the parsed value of the Authorization header as Authorization object.
The Authorization object in parsed form.
Add entity tag and cache descriptors to a request object or object with a WSGI environment available as environ. This not only provides access to etags but also to the cache control header.
A RequestCacheControl object for the incoming cache control headers.
The parsed If-Modified-Since header as datetime object.
The parsed If-Unmodified-Since header as datetime object.
Adds extra functionality to a response object for etag and cache handling. This mixin requires an object with at least a headers object that implements a dict like interface similar to Headers.
If you want the freeze() method to automatically add an etag, you have to mix this class in before the response base class. The default response class does not do that.
The Accept-Ranges header. Even though the name would indicate that multiple values are supported, it must be one string token only.
The values 'bytes' and 'none' are common.
New in version 0.7.
Add an etag for the current response if there is none yet.
The Cache-Control general-header field is used to specify directives that MUST be obeyed by all caching mechanisms along the request/response chain.
The Content-Range header as a ContentRange object. Even if the header is not set it will provide such an object for easier manipulation.
New in version 0.7.
Call this method if you want to make your response object ready for pickling. This buffers the generator if there is one. This also sets the etag unless no_etag is set to True.
Return a tuple in the form (etag, is_weak). If there is no ETag the return value is (None, None).
Make the response conditional to the request. This method works best if an etag was defined for the response already. The add_etag method can be used to do that. If called without etag just the date header is set.
This does nothing if the request method in the request or environ is anything but GET or HEAD.
It does not remove the body of the response because that’s something the __call__() function does for us automatically.
Returns self so that you can do return resp.make_conditional(req) but modifies the object in-place.
Parameters: request_or_environ – a request object or WSGI environment to be used to make the response conditional against.
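A sketch of a conditional response round trip (using the full Response class, which mixes ETagResponseMixin in):

```python
from werkzeug.test import create_environ
from werkzeug.wrappers import Response

response = Response('cacheable body')
response.add_etag()

# simulate the client revalidating with the etag it saw earlier
environ = create_environ('/', headers={
    'If-None-Match': response.headers['ETag']})
response.make_conditional(environ)

response.status_code  # 304 -- the body will not be resent
```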
Set the etag, and override the old one if there was one.
Mixin for BaseResponse subclasses. Classes that inherit from this mixin will automatically get a stream property that provides a write-only interface to the response iterable.
The response iterable as write-only stream.
A mixin for BaseRequest subclasses. Request objects that mix this class in will automatically get descriptors for a couple of HTTP headers with automatic type conversion.
New in version 0.5.
The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field.
New in version 0.9.
The Content-Length entity-header field indicates the size of the entity-body in bytes or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET.
The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.)
New in version 0.9.
The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.
The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822.
The Max-Forwards request-header field provides a mechanism with the TRACE and OPTIONS methods to limit the number of proxies or gateways that can forward the request to the next inbound server.
Like content_type but without parameters (e.g. without charset, type etc.). For example if the content type is text/html; charset=utf-8 the mimetype would be 'text/html'.
The mimetype parameters as dict. For example if the content type is text/html; charset=utf-8 the params would be {'charset': 'utf-8'}.
The Pragma general-header field is used to include implementation-specific directives that might apply to any recipient along the request/response chain. All pragma directives specify optional behavior from the viewpoint of the protocol; however, some systems MAY require that behavior be consistent with the directives.
The Referer[sic] request-header field allows the client to specify, for the server’s benefit, the address (URI) of the resource from which the Request-URI was obtained (the “referrer”, although the header field is misspelled).
A mixin for BaseResponse subclasses. Response objects that mix this class in will automatically get descriptors for a couple of HTTP headers with automatic type conversion.
The Age response-header field conveys the sender’s estimate of the amount of time since the response (or its revalidation) was generated at the origin server.
Age values are non-negative decimal integers, representing time in seconds.
The Allow entity-header field lists the set of methods supported by the resource identified by the Request-URI. The purpose of this field is strictly to inform the recipient of valid methods associated with the resource. An Allow header field MUST be present in a 405 (Method Not Allowed) response.
The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field.
The Content-Language entity-header field describes the natural language(s) of the intended audience for the enclosed entity. Note that this might not be equivalent to all the languages used within the entity-body.
The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET.
The Content-Location entity-header field MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource’s URI.
The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.)
The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.
The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822.
The Expires entity-header field gives the date/time after which the response is considered stale. A stale cache entry may not normally be returned by a cache.
The Last-Modified entity-header field indicates the date and time at which the origin server believes the variant was last modified.
The Location response-header field is used to redirect the recipient to a location other than the Request-URI for completion of the request or identification of a new resource.
The mimetype (content type without charset etc.)
The mimetype parameters as dict. For example if the content type is text/html; charset=utf-8 the params would be {'charset': 'utf-8'}.
New in version 0.5.
The Retry-After response-header field can be used with a 503 (Service Unavailable) response to indicate how long the service is expected to be unavailable to the requesting client.
Time in seconds until expiration or date.
The Vary field value indicates the set of request-header fields that fully determines, while the response is fresh, whether a cache is permitted to use the response to reply to a subsequent request without revalidation.
Adds a www_authenticate property to a response object.
The WWW-Authenticate header in a parsed form.
When it comes to combining multiple controller or view functions (however you want to call them), you need a dispatcher. A simple way would be applying regular expression tests on PATH_INFO and calling registered callback functions that return the value.
Werkzeug provides a much more powerful system, similar to Routes. All the objects mentioned on this page must be imported from werkzeug.routing, not from werkzeug!
Here is a simple example which could be the URL definition for a blog:
from werkzeug.routing import Map, Rule, NotFound, RequestRedirect
from werkzeug.exceptions import HTTPException

url_map = Map([
    Rule('/', endpoint='blog/index'),
    Rule('/<int:year>/', endpoint='blog/archive'),
    Rule('/<int:year>/<int:month>/', endpoint='blog/archive'),
    Rule('/<int:year>/<int:month>/<int:day>/', endpoint='blog/archive'),
    Rule('/<int:year>/<int:month>/<int:day>/<slug>',
         endpoint='blog/show_post'),
    Rule('/about', endpoint='blog/about_me'),
    Rule('/feeds/', endpoint='blog/feeds'),
    Rule('/feeds/<feed_name>.rss', endpoint='blog/show_feed')
])

def application(environ, start_response):
    urls = url_map.bind_to_environ(environ)
    try:
        endpoint, args = urls.match()
    except HTTPException as e:
        return e(environ, start_response)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Rule points to %r with arguments %r' % (endpoint, args)]
So what does that do? First of all we create a new Map which stores a bunch of URL rules. Then we pass it a list of Rule objects.
Each Rule object is instantiated with a string that represents a rule and an endpoint which will be the alias for what view the rule represents. Multiple rules can have the same endpoint, but should have different arguments to allow URL construction.
The format for the URL rules is straightforward, but explained in detail below.
Inside the WSGI application we bind the url_map to the current request, which will return a new MapAdapter. This url adapter can then be used to match or build URLs for the current request.
The MapAdapter.match() method can then either return a tuple in the form (endpoint, args) or raise one of the three exceptions NotFound, MethodNotAllowed, or RequestRedirect. For more details about those exceptions have a look at the documentation of the MapAdapter.match() method.
Rule strings basically are just normal URL paths with placeholders in the format <converter(arguments):name>, where converter and the arguments are optional. If no converter is defined, the default converter is used (which means string in the normal configuration).
URL rules that end with a slash are branch URLs, others are leaves. If you have strict_slashes enabled (which is the default), all branch URLs that are visited without a trailing slash will trigger a redirect to the same URL with that slash appended.
The list of converters can be extended, the default converters are explained below.
Here is a list of converters that come with Werkzeug:
This converter is the default converter and accepts any string but only one path segment. Thus the string cannot include a slash.
This is the default validator.
Example:
Rule('/pages/<page>'),
Rule('/<string(length=2):lang_code>')
Like the default UnicodeConverter, but it also matches slashes. This is useful for wikis and similar applications:
Rule('/<path:wikipage>')
Rule('/<path:wikipage>/edit')
Parameters: map – the Map.
Matches one of the items provided. Items can either be Python identifiers or strings:
Rule('/<any(about, help, imprint, class, "foo,bar"):page_name>')
This converter only accepts integer values:
Rule('/page/<int:page>')
This converter does not support negative values.
The map class stores all the URL rules and some configuration parameters. Some of the configuration values are only stored on the Map instance since those affect all rules, others are just defaults and can be overridden for each rule. Note that you have to specify all arguments besides the rules as keyword arguments!
New in version 0.5: sort_parameters and sort_key were added.
New in version 0.7: encoding_errors and host_matching were added.
The dictionary of converters. This can be modified after the class was created, but will only affect rules added after the modification. If the rules are defined with the list passed to the class, the converters parameter to the constructor has to be used instead.
Add a new rule or factory to the map and bind it. Requires that the rule is not bound to another map.
Parameters: rulefactory – a Rule or RuleFactory
Return a new MapAdapter with the details specified to the call. Note that script_name will default to '/' if not further specified or None. The server_name at least is a requirement because the HTTP RFC requires absolute URLs for redirects and so all redirect exceptions raised by Werkzeug will contain the full canonical URL.
If no path_info is passed to match() it will use the default path info passed to bind. While this doesn’t really make sense for manual bind calls, it’s useful if you bind a map to a WSGI environment which already contains the path info.
subdomain will default to the default_subdomain for this map if not defined. If there is no default_subdomain you cannot use the subdomain feature.
New in version 0.7: query_args added.
Changed in version 0.8: query_args can now also be a string.
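A small sketch of bind() with the subdomain feature (the rule and endpoint names here are made up for illustration):

```python
from werkzeug.routing import Map, Rule

# A hypothetical map whose rules live on a variable language subdomain.
url_map = Map([Rule('/', endpoint='index', subdomain='<lang>')])

# script_name defaults to '/'; server_name is required so that redirect
# exceptions can carry full canonical URLs.
urls = url_map.bind('example.com', subdomain='de')
urls.match('/')  # ('index', {'lang': 'de'})
```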
Like bind() but you can pass it an WSGI environment and it will fetch the information from that dictionary. Note that because of limitations in the protocol there is no way to get the current subdomain and real server_name from the environment. If you don’t provide it, Werkzeug will use SERVER_NAME and SERVER_PORT (or HTTP_HOST if provided) as used server_name with disabled subdomain feature.
If subdomain is None but an environment and a server name is provided it will calculate the current subdomain automatically. Example: server_name is 'example.com' and the SERVER_NAME in the wsgi environ is 'staging.dev.example.com' the calculated subdomain will be 'staging.dev'.
If the object passed as environ has an environ attribute, the value of this attribute is used instead. This allows you to pass request objects. Additionally PATH_INFO is used as the default path of the MapAdapter so that you don't have to pass the path info to the match method.
Changed in version 0.5: previously this method accepted a bogus calculate_subdomain parameter that did not have any effect. It was removed because of that.
Changed in version 0.8: This will no longer raise a ValueError when an unexpected server name was passed.
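The subdomain calculation described above can be sketched like this (the host names are made up for illustration):

```python
from werkzeug.routing import Map, Rule
from werkzeug.test import create_environ

url_map = Map([Rule('/', endpoint='index')])

# The environ carries the full host; server_name tells Werkzeug which
# part is the base domain, so the rest becomes the subdomain.
env = create_environ('/', 'http://staging.dev.example.com/')
urls = url_map.bind_to_environ(env, server_name='example.com')
urls.subdomain  # 'staging.dev'
```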
New in version 0.6: a dict of default converters to be used.
Iterate over all rules and check if the endpoint expects the arguments provided. This is for example useful if you have some URLs that expect a language code and others that do not and you want to wrap the builder a bit so that the current language code is automatically added if not provided but endpoints expect it.
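The method described here is Map.is_endpoint_expecting(); a minimal sketch with made-up rules:

```python
from werkzeug.routing import Map, Rule

url_map = Map([
    Rule('/<lang_code>/about', endpoint='about'),
    Rule('/imprint', endpoint='imprint')
])

# 'about' has a rule that expects a lang_code argument, 'imprint' does not.
url_map.is_endpoint_expecting('about', 'lang_code')    # True
url_map.is_endpoint_expecting('imprint', 'lang_code')  # False
```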
Iterate over all rules or the rules of an endpoint.
Parameters: endpoint – if provided only the rules for that endpoint are returned.
Returns: an iterator
Called before matching and building to keep the compiled rules in the correct order after things changed.
Returned by Map.bind() or Map.bind_to_environ() and does the URL matching and building based on runtime information.
Returns the valid methods that match for a given path.
New in version 0.7.
Building URLs works pretty much the other way round. Instead of match you call build and pass it the endpoint and a dict of arguments for the placeholders.
The build function also accepts an argument called force_external which, if set to True, forces fully qualified external URLs. By default external URLs (which include the server name) are only used if the target URL is on a different subdomain.
>>> m = Map([
... Rule('/', endpoint='index'),
... Rule('/downloads/', endpoint='downloads/index'),
... Rule('/downloads/<int:id>', endpoint='downloads/show')
... ])
>>> urls = m.bind("example.com", "/")
>>> urls.build("index", {})
'/'
>>> urls.build("downloads/show", {'id': 42})
'/downloads/42'
>>> urls.build("downloads/show", {'id': 42}, force_external=True)
'http://example.com/downloads/42'
Because URLs cannot contain non-ASCII data you will always get bytestrings back. Non-ASCII characters are urlencoded with the charset defined on the map instance.
Additional values are converted to unicode and appended to the URL as URL querystring parameters:
>>> urls.build("index", {'q': 'My Searchstring'})
'/?q=My+Searchstring'
If a rule does not exist when building, a BuildError exception is raised.
The build method accepts an argument called method which allows you to specify the method you want to have a URL built for if you have different methods for the same endpoint specified.
New in version 0.6: the append_unknown parameter was added.
Does the complete dispatching process. view_func is called with the endpoint and a dict with the values for the view. It should look up the view function, call it, and return a response object or WSGI application. HTTP exceptions are not caught by default so that applications can display nicer error messages by just catching them by hand. If you want to stick with the default error messages you can pass it catch_http_exceptions=True and it will catch the HTTP exceptions.
Here is a small example of dispatch usage:
from werkzeug.wrappers import Request, Response
from werkzeug.wsgi import responder
from werkzeug.routing import Map, Rule

def on_index(request):
    return Response('Hello from the index')

url_map = Map([Rule('/', endpoint='index')])
views = {'index': on_index}

@responder
def application(environ, start_response):
    request = Request(environ)
    urls = url_map.bind_to_environ(environ)
    return urls.dispatch(lambda e, v: views[e](request, **v),
                         catch_http_exceptions=True)
Keep in mind that this method might return exception objects, too, so use Response.force_type to get a response object.
A helper that returns the URL to redirect to if it finds one. This is used for default redirecting only.
Internal.
Figures out the full host name for the given domain part. The domain part is a subdomain in case host matching is disabled or a full host name.
Internally called to make an alias redirect URL.
Creates a redirect URL.
Internal.
The usage is simple: you just pass the match method the current path info as well as the method (which defaults to GET). The following things can then happen:
If the path info is not passed to the match method the default path info of the map is used (defaults to the root URL if not defined explicitly).
All of the exceptions raised are subclasses of HTTPException so they can be used as WSGI responses. They will all render generic error or redirect pages.
Here is a small example for matching:
>>> m = Map([
... Rule('/', endpoint='index'),
... Rule('/downloads/', endpoint='downloads/index'),
... Rule('/downloads/<int:id>', endpoint='downloads/show')
... ])
>>> urls = m.bind("example.com", "/")
>>> urls.match("/", "GET")
('index', {})
>>> urls.match("/downloads/42")
('downloads/show', {'id': 42})
And here is what happens on redirect and missing URLs:
>>> urls.match("/downloads")
Traceback (most recent call last):
...
RequestRedirect: http://example.com/downloads/
>>> urls.match("/missing")
Traceback (most recent call last):
...
NotFound: 404 Not Found
New in version 0.6: return_rule was added.
New in version 0.7: query_args was added.
Changed in version 0.8: query_args can now also be a string.
Test if a rule would match. Works like match but returns True if the URL matches, or False if it does not exist.
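A minimal sketch of test() against the map used in the matching examples above:

```python
from werkzeug.routing import Map, Rule

m = Map([Rule('/downloads/<int:id>', endpoint='downloads/show')])
urls = m.bind('example.com', '/')

# test() swallows the match exceptions and just reports success or failure.
urls.test('/downloads/42')  # True
urls.test('/missing')       # False
```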
A Rule represents one URL pattern. There are some options for Rule that change the way it behaves and are passed to the Rule constructor. Note that besides the rule-string all arguments must be keyword arguments in order to not break the application on Werkzeug upgrades.
Rule strings basically are just normal URL paths with placeholders in the format <converter(arguments):name> where the converter and the arguments are optional. If no converter is defined the default converter is used which means string in the normal configuration.
URL rules that end with a slash are branch URLs, others are leaves. If you have strict_slashes enabled (which is the default), all branch URLs that are matched without a trailing slash will trigger a redirect to the same URL with the missing slash appended.
The converters are defined on the Map.
An optional dict with defaults for other rules with the same endpoint. This is a bit tricky but useful if you want to have unique URLs:
url_map = Map([
    Rule('/all/', defaults={'page': 1}, endpoint='all_entries'),
    Rule('/all/page/<int:page>', endpoint='all_entries')
])
If a user now visits http://example.com/all/page/1 he will be redirected to http://example.com/all/. If redirect_defaults is disabled on the Map instance this will only affect the URL generation.
The subdomain rule string for this rule. If not specified the rule only matches for the default_subdomain of the map. If the map is not bound to a subdomain this feature is disabled.
Can be useful if you want to have user profiles on different subdomains and all subdomains are forwarded to your application:
url_map = Map([
    Rule('/', subdomain='<username>', endpoint='user/homepage'),
    Rule('/stats', subdomain='<username>', endpoint='user/stats')
])
A sequence of HTTP methods this rule applies to. If not specified, all methods are allowed. For example, this can be useful if you want different endpoints for POST and GET. If methods are defined and the path matches but the method is not in this list or in the list of another rule for that path, the error raised is MethodNotAllowed rather than NotFound. If GET is present in the list of methods and HEAD is not, HEAD is added automatically.
Changed in version 0.6.1: HEAD is now automatically added to the methods if GET is present. The reason for this is that existing code often did not work properly in servers not rewriting HEAD to GET automatically and it was not documented how HEAD should be treated. This was considered a bug in Werkzeug because of that.
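A sketch of method-specific rules (the paths and endpoint names are made up for illustration):

```python
from werkzeug.routing import Map, Rule
from werkzeug.exceptions import MethodNotAllowed

url_map = Map([
    Rule('/item', methods=['GET'], endpoint='item/show'),
    Rule('/item', methods=['POST'], endpoint='item/create')
])
urls = url_map.bind('example.com', '/')

urls.match('/item', method='POST')  # ('item/create', {})

# The path exists but no rule allows DELETE, so MethodNotAllowed is
# raised instead of NotFound.
try:
    urls.match('/item', method='DELETE')
except MethodNotAllowed:
    pass
```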
If given this must be either a string or callable. In case of a callable it’s called with the url adapter that triggered the match and the values of the URL as keyword arguments and has to return the target for the redirect, otherwise it has to be a string with placeholders in rule syntax:
def foo_with_slug(adapter, id):
    # ask the database for the slug for the old id. this of
    # course has nothing to do with werkzeug.
    return 'foo/' + Foo.get_slug_for_id(id)

url_map = Map([
    Rule('/foo/<slug>', endpoint='foo'),
    Rule('/some/old/url/<slug>', redirect_to='foo/<slug>'),
    Rule('/other/old/url/<int:id>', redirect_to=foo_with_slug)
])
When the rule is matched the routing system will raise a RequestRedirect exception with the target for the redirect.
Keep in mind that the URL will be joined against the URL root of the script so don’t use a leading slash on the target URL unless you really mean root of that domain.
New in version 0.7: The alias and host parameters were added.
Return an unbound copy of this rule. This can be useful if you want to reuse an already bound URL for another map.
As soon as you have more complex URL setups it’s a good idea to use rule factories to avoid repetitive tasks. Some of them are builtin, others can be added by subclassing RuleFactory and overriding get_rules.
Subclasses of RuleFactory have to override this method and return an iterable of rules.
All URLs provided by this factory have the subdomain set to a specific domain. For example if you want to use the subdomain for the current language this can be a good setup:
url_map = Map([
    Rule('/', endpoint='#select_language'),
    Subdomain('<string(length=2):lang_code>', [
        Rule('/', endpoint='index'),
        Rule('/about', endpoint='about'),
        Rule('/help', endpoint='help')
    ])
])
All the rules except for the '#select_language' endpoint will now listen on a two letter long subdomain that holds the language code for the current request.
Like Subdomain but prefixes the URL rule with a given string:
url_map = Map([
    Rule('/', endpoint='index'),
    Submount('/blog', [
        Rule('/', endpoint='blog/index'),
        Rule('/entry/<entry_slug>', endpoint='blog/show')
    ])
])
Now the rule 'blog/show' matches /blog/entry/<entry_slug>.
Prefixes all endpoints (which must be strings for this factory) with another string. This can be useful for sub applications:
url_map = Map([
    Rule('/', endpoint='index'),
    EndpointPrefix('blog/', [Submount('/blog', [
        Rule('/', endpoint='index'),
        Rule('/entry/<entry_slug>', endpoint='show')
    ])])
])
Returns copies of the rules wrapped and expands string templates in the endpoint, rule, defaults or subdomain sections.
Here is a small example of such a rule template:
from werkzeug.routing import Map, Rule, RuleTemplate
resource = RuleTemplate([
    Rule('/$name/', endpoint='$name.list'),
    Rule('/$name/<int:id>', endpoint='$name.show')
])

url_map = Map([resource(name='user'), resource(name='page')])
When a rule template is called the keyword arguments are used to replace the placeholders in all the string parameters.
You can easily add custom converters. The only thing you have to do is to subclass BaseConverter and pass that new converter to the url_map. A converter has to provide two public methods: to_python and to_url, as well as a member that represents a regular expression. Here is a small example:
from random import randrange
from werkzeug.routing import Rule, Map, BaseConverter, ValidationError
class BooleanConverter(BaseConverter):

    def __init__(self, url_map, randomify=False):
        super(BooleanConverter, self).__init__(url_map)
        self.randomify = randomify
        self.regex = '(?:yes|no|maybe)'

    def to_python(self, value):
        if value == 'maybe':
            if self.randomify:
                return not randrange(2)
            raise ValidationError()
        return value == 'yes'

    def to_url(self, value):
        return value and 'yes' or 'no'

url_map = Map([
    Rule('/vote/<bool:werkzeug_rocks>', endpoint='vote'),
    Rule('/vote/<bool(randomify=True):foo>', endpoint='foo')
], converters={'bool': BooleanConverter})
If you want that converter to be the default converter, name it 'default'.
New in version 0.7.
Starting with Werkzeug 0.7 it’s also possible to do matching on the whole host names instead of just the subdomain. To enable this feature you need to pass host_matching=True to the Map constructor and provide the host argument to all routes:
url_map = Map([
    Rule('/', endpoint='www_index', host='www.example.com'),
    Rule('/', endpoint='help_index', host='help.example.com')
], host_matching=True)
Variable parts are of course also possible in the host section:
url_map = Map([
    Rule('/', endpoint='www_index', host='www.example.com'),
    Rule('/', endpoint='user_index', host='<user>.example.com')
], host_matching=True)
The following classes and functions are designed to make working with the WSGI specification easier or operate on the WSGI layer. All the functionality from this module is available on the high-level Request/Response classes.
These classes and functions simplify working with the WSGI application iterator and the input stream.
The WSGI specification requires that all middlewares and gateways respect the close callback of an iterator. Because it is useful to add another close action to a returned iterator and adding a custom iterator is a boring task this class can be used for that:
return ClosingIterator(app(environ, start_response),
                       [cleanup_session, cleanup_locals])
If there is just one close function it can be passed instead of the list.
A closing iterator is not needed if the application uses response objects and finishes the processing if the response is started:
try:
    return response(environ, start_response)
finally:
    cleanup_session()
    cleanup_locals()
This class can be used to convert a file-like object into an iterable. It yields buffer_size blocks until the file is fully read.
You should not use this class directly but rather use the wrap_file() function that uses the WSGI server’s file wrapper support if it’s available.
New in version 0.5.
If you’re using this object together with a BaseResponse you have to use the direct_passthrough mode.
Wraps a stream so that it doesn’t read more than n bytes. If the stream is exhausted and the caller tries to get more bytes from it on_exhausted() is called which by default returns an empty string. The return value of that function is forwarded to the reader function. So if it returns an empty string read() will return an empty string as well.
The limit however must never be higher than what the stream can output. Otherwise readlines() will try to read past the limit.
Note on WSGI compliance
Calls to readline() and readlines() are not WSGI compliant because they pass a size argument to the readline methods. Unfortunately the WSGI PEP is not safely implementable without a size argument to readline() because there is no EOF marker in the stream. As a result the use of readline() is discouraged.
For the same reason iterating over the LimitedStream is not portable. It internally calls readline().
We strongly suggest using read() only or using the make_line_iter() which safely iterates line-based over a WSGI input stream.
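A minimal sketch of LimitedStream wrapping an in-memory stream (the data here is made up; in a WSGI application the wrapped stream would be wsgi.input and the limit the content length):

```python
import io
from werkzeug.wsgi import LimitedStream

# Limit reads to the first 5 bytes of the underlying stream.
stream = LimitedStream(io.BytesIO(b'hello world'), 5)
data = stream.read()  # b'hello'
stream.is_exhausted   # True
```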
Exhaust the stream. This consumes all the data left until the limit is reached.
Parameters: chunk_size – the size for a chunk. It will read the chunk until the stream is exhausted and throw away the results.
If the stream is exhausted this attribute is True.
What should happen if a disconnect is detected? The return value of this function is returned from read functions in case the client went away. By default a ClientDisconnected exception is raised.
This is called when the stream tries to read past the limit. The return value of this function is returned from the reading function.
Read size bytes or, if size is not provided, read everything.
Parameters: size – the number of bytes to read.
Reads one line from the stream.
Reads a file into a list of strings. It calls readline() until the file is read to the end. It does support the optional size argument if the underlying stream supports it for readline.
Returns the position of the stream.
New in version 0.9.
Safely iterates line-based over an input stream. If the input stream is not a LimitedStream the limit parameter is mandatory.
This uses the stream’s read() method internally as opposite to the readline() method that is unsafe and can only be used in violation of the WSGI specification. The same problem applies to the __iter__ function of the input stream which calls readline() without arguments.
If you need line-by-line processing it’s strongly recommended to iterate over the input stream using this helper function.
Changed in version 0.8: This function now ensures that the limit was reached.
New in version 0.9: added support for iterators as input stream.
Works like make_line_iter() but accepts a separator which divides chunks. If you want newline based processing you should use make_line_iter() instead as it supports arbitrary newline markers.
New in version 0.8.
New in version 0.9: added support for iterators as input stream.
Wraps a file. This uses the WSGI server’s file wrapper if available or otherwise the generic FileWrapper.
New in version 0.5.
If the file wrapper from the WSGI server is used it’s important to not iterate over it from inside the application but to pass it through unchanged. If you want to pass out a file wrapper inside a response object you have to set direct_passthrough to True.
More information about file wrappers is available in PEP 333.
These functions operate on the WSGI environment. They extract useful information or perform common manipulations:
Return the real host for the given WSGI environment. This takes care of the X-Forwarded-Host header. Optionally it verifies that the host is in a list of trusted hosts. If the host is not in there it will raise a SecurityError.
Returns the content length from the WSGI environment as integer. If it’s not available None is returned.
New in version 0.9.
Parameters: environ – the WSGI environ to fetch the content length from.
Returns the input stream from the WSGI environment and wraps it in the most sensible way possible. The stream returned is not the raw WSGI stream in most cases but one that is safe to read from without taking into account the content length.
New in version 0.9.
A handy helper function that recreates the full URL for the current request or parts of it. Here is an example:
>>> from werkzeug.test import create_environ
>>> env = create_environ("/?param=foo", "http://localhost/script")
>>> get_current_url(env)
'http://localhost/script/?param=foo'
>>> get_current_url(env, root_only=True)
'http://localhost/script/'
>>> get_current_url(env, host_only=True)
'http://localhost/'
>>> get_current_url(env, strip_querystring=True)
'http://localhost/script/'
Optionally it verifies that the host is in a list of trusted hosts. If the host is not in there it will raise a SecurityError.
Returns the QUERY_STRING from the WSGI environment as a native string. This also takes care of the WSGI decoding dance on Python 3 environments. The string returned will be restricted to ASCII characters.
New in version 0.9.
Parameters: environ – the WSGI environment object to get the query string from.
Returns the SCRIPT_NAME from the WSGI environment and properly decodes it. This also takes care of the WSGI decoding dance on Python 3 environments. If the charset is set to None a bytestring is returned.
New in version 0.9.
Returns the PATH_INFO from the WSGI environment and properly decodes it. This also takes care of the WSGI decoding dance on Python 3 environments. If the charset is set to None a bytestring is returned.
New in version 0.9.
Removes and returns the next segment of PATH_INFO, pushing it onto SCRIPT_NAME. Returns None if there is nothing left on PATH_INFO.
If the charset is set to None a bytestring is returned.
If there are empty segments ('/foo//bar') these are ignored but properly pushed to the SCRIPT_NAME:
>>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'}
>>> pop_path_info(env)
'a'
>>> env['SCRIPT_NAME']
'/foo/a'
>>> pop_path_info(env)
'b'
>>> env['SCRIPT_NAME']
'/foo/a/b'
New in version 0.5.
Changed in version 0.9: The path is now decoded and a charset and encoding parameter can be provided.
Parameters: environ – the WSGI environment that is modified.
Returns the next segment on the PATH_INFO or None if there is none. Works like pop_path_info() without modifying the environment:
>>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'}
>>> peek_path_info(env)
'a'
>>> peek_path_info(env)
'a'
If the charset is set to None a bytestring is returned.
New in version 0.5.
Changed in version 0.9: The path is now decoded and a charset and encoding parameter can be provided.
Parameters: environ – the WSGI environment that is checked.
Extracts the path info from the given URL (or WSGI environment) and path. The path info returned is a unicode string, not a bytestring suitable for a WSGI environment. The URLs might also be IRIs.
If the path info could not be determined, None is returned.
Some examples:
>>> extract_path_info('http://example.com/app', '/app/hello')
u'/hello'
>>> extract_path_info('http://example.com/app',
... 'https://example.com/app/hello')
u'/hello'
>>> extract_path_info('http://example.com/app',
... 'https://example.com/app/hello',
... collapse_http_schemes=False) is None
True
Instead of providing a base URL you can also pass a WSGI environment.
New in version 0.6.
Checks if a host is trusted against a list. This also takes care of port normalization.
New in version 0.9.
Marks a function as responder. Decorate a function with it and it will automatically call the return value as WSGI application.
Example:
from werkzeug.wrappers import Response
from werkzeug.wsgi import responder

@responder
def application(environ, start_response):
    return Response('Hello World!')
Simple test application that dumps the environment. You can use it to check if Werkzeug is working properly:
>>> from werkzeug.serving import run_simple
>>> from werkzeug.testapp import test_app
>>> run_simple('localhost', 3000, test_app)
* Running on http://localhost:3000/
The application displays important information from the WSGI environment, the Python interpreter and the installed libraries.
Werkzeug provides a couple of functions to parse and generate HTTP headers that are useful when implementing WSGI middlewares or whenever you are operating on a lower level layer. All this functionality is also exposed from request and response objects.
The following functions simplify working with times in an HTTP context. Werkzeug uses offset-naive datetime objects internally that store the time in UTC. If you’re working with timezones in your application make sure to replace the tzinfo attribute with a UTC timezone information before processing the values.
Formats the time to ensure compatibility with Netscape’s cookie standard.
Accepts a floating point number expressed in seconds since the epoch, a datetime object, or a timetuple. All times in UTC. The parse_date() function can be used to parse such a date.
Outputs a string in the format Wdy, DD-Mon-YYYY HH:MM:SS GMT.
Parameters: expires – If provided that date is used, otherwise the current time.
Formats the time to match the RFC1123 date format.
Accepts a floating point number expressed in seconds since the epoch, a datetime object, or a timetuple. All times in UTC. The parse_date() function can be used to parse such a date.
Outputs a string in the format Wdy, DD Mon YYYY HH:MM:SS GMT.
Parameters: timestamp – If provided that date is used, otherwise the current time.
Parse one of the following date formats into a datetime object:
Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
If parsing fails the return value is None.
Parameters: value – a string with a supported date format.
Returns: a datetime.datetime object.
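A small sketch of http_date() and parse_date() as a round trip, using the RFC 1123 example date from above:

```python
from datetime import datetime
from werkzeug.http import http_date, parse_date

# Format an offset-naive UTC datetime as an RFC 1123 date string.
stamp = http_date(datetime(1994, 11, 6, 8, 49, 37))
stamp  # 'Sun, 06 Nov 1994 08:49:37 GMT'

# parse_date reverses the operation.
parse_date(stamp).timetuple()[:6]  # (1994, 11, 6, 8, 49, 37)
```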
The following functions can be used to parse incoming HTTP headers. Because Python does not provide data structures with the semantics required by RFC 2616, Werkzeug implements some custom data structures that are documented separately.
Parse a Content-Type like header into a tuple with the content type and the options:
>>> parse_options_header('text/html; charset=utf8')
('text/html', {'charset': 'utf8'})
This should not be used to parse Cache-Control like headers that use a slightly different format. For these headers use the parse_dict_header() function.
New in version 0.5.
Parameters: value – the header to parse.
Returns: (str, options)
Parse a set-like header and return a HeaderSet object:
>>> hs = parse_set_header('token, "quoted value"')
The return value is an object that treats the items case-insensitively and keeps the order of the items:
>>> 'TOKEN' in hs
True
>>> hs.index('quoted value')
1
>>> hs
HeaderSet(['token', 'quoted value'])
To create a header from the HeaderSet again, use the dump_header() function.
Parse lists as described by RFC 2068 Section 2.
In particular, parse comma-separated lists where the elements of the list may include quoted-strings. A quoted-string could contain a comma. A non-quoted string could have quotes in the middle. Quotes are removed automatically after parsing.
It basically works like parse_set_header() just that items may appear multiple times and case sensitivity is preserved.
The return value is a standard list:
>>> parse_list_header('token, "quoted value"')
['token', 'quoted value']
To create a header from the list again, use the dump_header() function.
Parameters: value – a string with a list header.
Returns: list
Parse lists of key, value pairs as described by RFC 2068 Section 2 and convert them into a python dict (or any other mapping object created from the type with a dict like interface provided by the cls argument):
>>> d = parse_dict_header('foo="is a fish", bar="as well"')
>>> type(d) is dict
True
>>> sorted(d.items())
[('bar', 'as well'), ('foo', 'is a fish')]
If there is no value for a key it will be None:
>>> parse_dict_header('key_without_value')
{'key_without_value': None}
To create a header from the dict again, use the dump_header() function.
Changed in version 0.9: Added support for the cls argument.
Returns: an instance of cls
Parses an HTTP Accept-* header. This does not implement a complete valid algorithm but one that supports at least value and quality extraction.
Returns a new Accept object (basically a list of (value, quality) tuples sorted by the quality with some additional accessor methods).
The second parameter can be a subclass of Accept that is created with the parsed values and returned.
Returns: an instance of cls.
Parse a cache control header. The RFC differs between response and request cache control, this method does not. It’s your responsibility to not use the wrong control statements.
New in version 0.5: The cls was added. If not specified an immutable RequestCacheControl is returned.
Returns: a cls object.
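A minimal sketch of parse_cache_control_header() on a made-up header value:

```python
from werkzeug.http import parse_cache_control_header

# Directives become attributes on the returned cache control object.
cc = parse_cache_control_header('no-cache, max-age=3600')
cc.max_age  # 3600
```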
Parse an HTTP basic/digest authorization header transmitted by the web browser. The return value is either None if the header was invalid or not given, otherwise an Authorization object.
Parameters: value – the authorization header to parse.
Returns: an Authorization object or None.
Parse an HTTP WWW-Authenticate header into a WWWAuthenticate object.
Returns: a WWWAuthenticate object.
Parses an if-range header which can be an etag or a date. Returns an IfRange object.
New in version 0.7.
Parses a range header into a Range object. If the header is missing or malformed None is returned. ranges is a list of (start, stop) tuples where the ranges are non-inclusive.
New in version 0.7.
Parses a range header into a ContentRange object or None if parsing is not possible.
New in version 0.7.
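A small sketch of parse_range_header(), illustrating the non-inclusive (start, stop) tuples described above:

```python
from werkzeug.http import parse_range_header

r = parse_range_header('bytes=0-499')
r.units   # 'bytes'
r.ranges  # [(0, 500)] -- stop is non-inclusive
```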
The following utilities operate on HTTP headers but do not parse them. They are useful if you're dealing with conditional responses or if you want to proxy arbitrary requests but want to remove WSGI-unsupported hop-by-hop headers. Also there is a function to create HTTP header strings from the parsed data.
Check if a header is an entity header.
New in version 0.5.
Parameters: header – the header to test.
Returns: True if it's an entity header, False otherwise.
Check if a header is an HTTP/1.1 “Hop-by-Hop” header.
New in version 0.5.
Parameters: header – the header to test.
Returns: True if it's a hop-by-hop header, False otherwise.
Remove all entity headers from a list or Headers object. This operation works in-place. Expires and Content-Location headers are by default not removed. The reason for this is RFC 2616 section 10.3.5 which specifies some entity headers that should be sent.
Changed in version 0.5: added the allowed parameter.
Remove all HTTP/1.1 “Hop-by-Hop” headers from a list or Headers object. This operation works in-place.
New in version 0.5.
Parameters: headers – a list or Headers object.
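The removal described above can be sketched in pure Python. This is not Werkzeug's implementation, only an illustration: the header names come from RFC 2616 section 13.5.1, which lists the headers that apply to a single transport-level connection and therefore must not be forwarded by a WSGI proxy.

```python
# Hop-by-hop headers per RFC 2616 section 13.5.1 (illustrative sketch,
# not Werkzeug's own code).
HOP_BY_HOP_HEADERS = frozenset([
    'connection', 'keep-alive', 'proxy-authenticate',
    'proxy-authorization', 'te', 'trailer',
    'transfer-encoding', 'upgrade',
])

def is_hop_by_hop(header_name):
    """Return True if the header must not be forwarded by a proxy."""
    return header_name.lower() in HOP_BY_HOP_HEADERS

def strip_hop_by_hop(headers):
    """Return a new list of (name, value) pairs without hop-by-hop headers."""
    return [(k, v) for k, v in headers if not is_hop_by_hop(k)]
```

Werkzeug's remove_hop_by_hop_headers works in-place on a list or Headers object instead of returning a new list.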
Checks if a given byte content range is valid for the given length.
New in version 0.7.
Quote a header value if necessary.
New in version 0.5.
Parameters:
Unquotes a header value. (Reversal of quote_header_value()). This does not use the real unquoting but what browsers are actually using for quoting.
New in version 0.5.
Parameters: value – the header value to unquote.
Dump an HTTP header again. This is the reversal of parse_list_header(), parse_set_header() and parse_dict_header(). This also quotes strings that include an equals sign unless you pass it as dict of key, value pairs.
>>> dump_header({'foo': 'bar baz'})
'foo="bar baz"'
>>> dump_header(('foo', 'bar baz'))
'foo, "bar baz"'
Parameters:
Parse a cookie. Either from a string or WSGI environ.
Per default encoding errors are ignored. If you want a different behavior you can set errors to 'replace' or 'strict'. In strict mode a HTTPUnicodeError is raised.
Changed in version 0.5: This function now returns a TypeConversionDict instead of a regular dict. The cls parameter was added.
Parameters:
Creates a new Set-Cookie header without the Set-Cookie prefix. The parameters are the same as in the cookie Morsel object in the Python standard library, but it accepts unicode data, too.
On Python 3 the return value of this function will be a unicode string, on Python 2 it will be a native string. In both cases the return value is usually restricted to ascii as the vast majority of values are properly escaped, but that is no guarantee. If a unicode string is returned it’s tunneled through latin1 as required by PEP 3333.
The return value is not ASCII safe if the key contains unicode characters. This is technically against the specification but happens in the wild. It’s strongly recommended to not use non-ASCII values for the keys.
Parameters:
For conditional responses the following functions might be useful:
Parse an etag header.
Parameters: value – the tag header to parse.
Returns: an ETags object.
Quote an etag.
Parameters:
Unquote a single etag:
>>> unquote_etag('w/"bar"')
('bar', True)
>>> unquote_etag('"bar"')
('bar', False)
Parameters: etag – the etag identifier to unquote.
Returns: a (etag, weak) tuple.
Generate an etag for some data.
Convenience method for conditional requests.
Parameters:
Returns: True if the resource was modified, otherwise False.
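The etag workflow for conditional requests can be illustrated with the standard library. The helper names (make_etag, resource_modified) and the MD5-of-body scheme are assumptions for the sketch; Werkzeug's generate_etag and is_resource_modified handle weak etags, If-Modified-Since dates, and more.

```python
import hashlib

def make_etag(data):
    """Derive an etag from the response body bytes (illustrative: MD5 hex)."""
    return hashlib.md5(data).hexdigest()

def resource_modified(if_none_match_header, data):
    """Return False if one of the client's cached etags matches the body."""
    etag = make_etag(data)
    # split a header like '"abc", "def"' into bare tag values
    client_tags = [t.strip().strip('"')
                   for t in (if_none_match_header or '').split(',')]
    return etag not in client_tags
```

A response handler would return 304 Not Modified when resource_modified is False.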
A dict of status code -> default status message pairs. This is used by the wrappers and other places where an integer status code is expanded to a string throughout Werkzeug.
Werkzeug provides the form parsing functions separately from the request object so that you can access form data from a plain WSGI environment.
The following formats are currently supported by the form data parser:
Nested multipart is not currently supported (Werkzeug 0.9), but it isn’t used by any of the modern web browsers.
Usage example:
>>> from cStringIO import StringIO
>>> from werkzeug.formparser import parse_form_data
>>> data = '--foo\r\nContent-Disposition: form-data; name="test"\r\n' \
... '\r\nHello World!\r\n--foo--'
>>> environ = {'wsgi.input': StringIO(data), 'CONTENT_LENGTH': str(len(data)),
... 'CONTENT_TYPE': 'multipart/form-data; boundary=foo',
... 'REQUEST_METHOD': 'POST'}
>>> stream, form, files = parse_form_data(environ)
>>> stream.read()
''
>>> form['test']
u'Hello World!'
>>> not files
True
Normally the WSGI environment is provided by the WSGI gateway with the incoming data as part of it. If you want to generate such fake-WSGI environments for unittesting you might want to use the create_environ() function or the EnvironBuilder instead.
This class implements parsing of form data for Werkzeug. By itself it can parse multipart and url encoded form data. It can be subclassed and extended but for most mimetypes it is a better idea to use the untouched stream and expose it as separate attributes on a request object.
New in version 0.8.
Parameters:
Parse the form data in the environ and return it as tuple in the form (stream, form, files). You should only call this method if the transport method is POST, PUT, or PATCH.
If the mimetype of the data transmitted is multipart/form-data the files multidict will be filled with FileStorage objects. If the mimetype is unknown the input stream is wrapped and returned as first argument, else the stream is empty.
This is a shortcut for the common usage of FormDataParser.
Have a look at Dealing with Request Data for more details.
New in version 0.5: The max_form_memory_size, max_content_length and cls parameters were added.
New in version 0.5.1: The optional silent flag was added.
Parameters:
Returns: A tuple in the form (stream, form, files).
Parses multipart headers from an iterable that yields lines (including the trailing newline symbol). The iterable has to be newline terminated.
The iterable will stop at the line where the headers ended so it can be further consumed.
Parameters: iterable – iterable of strings that are newline terminated
Werkzeug provides some subclasses of common Python objects to extend them with additional features. Some of them are used to make them immutable, others are used to change some semantics to better work with HTTP.
Changed in version 0.6: The general purpose classes are now pickleable in each protocol as long as the contained objects are pickleable. This means that the FileMultiDict won't be pickleable as soon as it contains a file.
Works like a regular dict but the get() method can perform type conversions. MultiDict and CombinedMultiDict are subclasses of this class and provide the same feature.
New in version 0.5.
Return the default value if the requested data doesn’t exist. If type is provided and is a callable it should convert the value, return it or raise a ValueError if that is not possible. In this case the function will return the default as if the value was not found:
>>> d = TypeConversionDict(foo='42', bar='blub')
>>> d.get('foo', type=int)
42
>>> d.get('bar', -1, type=int)
-1
Parameters:
Works like a TypeConversionDict but does not support modifications.
New in version 0.5.
A MultiDict is a dictionary subclass customized to deal with multiple values for the same key which is for example used by the parsing functions in the wrappers. This is necessary because some HTML form elements pass multiple values for the same key.
MultiDict implements all standard dictionary methods. Internally, it saves all values for a key as a list, but the standard dict access methods will only return the first value for a key. If you want to gain access to the other values, too, you have to use the list methods as explained below.
Basic Usage:
>>> d = MultiDict([('a', 'b'), ('a', 'c')])
>>> d
MultiDict([('a', 'b'), ('a', 'c')])
>>> d['a']
'b'
>>> d.getlist('a')
['b', 'c']
>>> 'a' in d
True
It behaves like a normal dict thus all dict functions will only return the first value when multiple values for one key are found.
From Werkzeug 0.3 onwards, the KeyError raised by this class is also a subclass of the BadRequest HTTP exception and will render a page for a 400 BAD REQUEST if caught in a catch-all for HTTP exceptions.
A MultiDict can be constructed from an iterable of (key, value) tuples, a dict, a MultiDict or from Werkzeug 0.2 onwards some keyword parameters.
Parameters: mapping – the initial value for the MultiDict. Either a regular dict, an iterable of (key, value) tuples or None.
Adds a new value for the key.
New in version 0.6.
Parameters:
Return a shallow copy of this object.
v defaults to None.
Return the default value if the requested data doesn’t exist. If type is provided and is a callable it should convert the value, return it or raise a ValueError if that is not possible. In this case the function will return the default as if the value was not found:
>>> d = TypeConversionDict(foo='42', bar='blub')
>>> d.get('foo', type=int)
42
>>> d.get('bar', -1, type=int)
-1
Parameters:
Return the list of items for a given key. If that key is not in the MultiDict, the return value will be an empty list. Just as get, getlist accepts a type parameter. All items will be converted with the callable defined there.
Parameters:
Returns: a list of all the values for the key.
Like iteritems(), but returns a list.
Return an iterator of (key, value) pairs.
Parameters: multi – If set to True the iterator returned will have a pair for each value of each key. Otherwise it will only contain pairs for the first value of each key.
Return a list of (key, values) pairs, where values is the list of all values associated with the key.
Return an iterator of all values associated with a key. Zipping keys() and this is the same as calling lists():
>>> d = MultiDict({"foo": [1, 2, 3]})
>>> zip(d.keys(), d.listvalues()) == d.lists()
True
Returns an iterator of the first value on every key’s value list.
Like iterkeys(), but returns a list.
Like iterlists(), but returns a list.
Like iterlistvalues(), but returns a list.
Pop the first item for a list on the dict. Afterwards the key is removed from the dict, so additional values are discarded:
>>> d = MultiDict({"foo": [1, 2, 3]})
>>> d.pop("foo")
1
>>> "foo" in d
False
Parameters:
Pop an item from the dict.
Pop a (key, list) tuple from the dict.
Pop the list for a key from the dict. If the key is not in the dict an empty list is returned.
Changed in version 0.5: If the key no longer exists, a list is returned instead of raising an error.
Returns the value for the key if it is in the dict, otherwise it returns default and sets that value for key.
Parameters:
Remove the old values for a key and add new ones. Note that the list you pass the values in will be shallow-copied before it is inserted in the dictionary.
>>> d = MultiDict()
>>> d.setlist('foo', ['1', '2'])
>>> d['foo']
'1'
>>> d.getlist('foo')
['1', '2']
Parameters:
Like setdefault but sets multiple values. The list returned is not a copy, but the list that is actually used internally. This means that you can put new values into the dict by appending items to the list:
>>> d = MultiDict({"foo": 1})
>>> d.setlistdefault("foo").extend([2, 3])
>>> d.getlist("foo")
[1, 2, 3]
Parameters:
Returns: a list
Return the contents as regular dict. If flat is True the returned dict will only have the first item present, if flat is False all values will be returned as lists.
Parameters: flat – If set to False the dict returned will have lists with all the values in it. Otherwise it will only contain the first value for each key.
Returns: a dict
update() extends rather than replaces existing key lists.
Like itervalues(), but returns a list.
Works like a regular MultiDict but preserves the order of the fields. To convert the ordered multi dict into a list you can use the items() method and pass it multi=True.
In general an OrderedMultiDict is an order of magnitude slower than a MultiDict.
Note: Due to a limitation in Python you cannot convert an ordered multi dict into a regular dict by using dict(multidict). Instead you have to use the to_dict() method, otherwise the internal bucket objects are exposed.
An immutable OrderedMultiDict.
New in version 0.6.
A read only MultiDict that you can pass multiple MultiDict instances as sequence and it will combine the return values of all wrapped dicts:
>>> from werkzeug.datastructures import CombinedMultiDict, MultiDict
>>> post = MultiDict([('foo', 'bar')])
>>> get = MultiDict([('blub', 'blah')])
>>> combined = CombinedMultiDict([get, post])
>>> combined['foo']
'bar'
>>> combined['blub']
'blah'
This works for all read operations and will raise a TypeError for methods that usually change data which isn’t possible.
From Werkzeug 0.3 onwards, the KeyError raised by this class is also a subclass of the BadRequest HTTP exception and will render a page for a 400 BAD REQUEST if caught in a catch-all for HTTP exceptions.
A special MultiDict that has convenience methods to add files to it. This is used for EnvironBuilder and generally useful for unittesting.
New in version 0.5.
Adds a new file to the dict. file can be a file name or a file-like or a FileStorage object.
Parameters:
The FileStorage class is a thin wrapper over incoming files. It is used by the request object to represent uploaded files. All the attributes of the wrapper stream are proxied by the file storage so it’s possible to do storage.read() instead of the long form storage.stream.read().
The input stream for the uploaded file. This usually points to an open temporary file.
The filename of the file on the client.
The name of the form field.
The multipart headers as Headers object. This usually contains irrelevant information but in combination with custom multipart requests the raw headers might be interesting.
New in version 0.6.
Close the underlying file if possible.
The content-length sent in the header. Usually not available.
The content-type sent in the header. Usually not available.
Like content_type but without parameters (eg, without charset, type etc.). For example if the content type is text/html; charset=utf-8 the mimetype would be 'text/html'.
New in version 0.7.
The mimetype parameters as dict. For example if the content type is text/html; charset=utf-8 the params would be {'charset': 'utf-8'}.
New in version 0.7.
Save the file to a destination path or file object. If the destination is a file object you have to close it yourself after the call. The buffer size is the number of bytes held in memory during the copy process. It defaults to 16KB.
For secure file saving also have a look at secure_filename().
Parameters:
Various utility functions shipped with Werkzeug.
Helper object for HTML generation.
Per default there are two instances of that class. The html one, and the xhtml one for those two dialects. The class uses keyword parameters and positional parameters to generate small snippets of HTML.
Keyword parameters are converted to XML/SGML attributes, positional arguments are used as children. Because Python accepts positional arguments before keyword arguments it’s a good idea to use a list with the star-syntax for some children:
>>> html.p(class_='foo', *[html.a('foo', href='foo.html'), ' ',
... html.a('bar', href='bar.html')])
u'<p class="foo"><a href="foo.html">foo</a> <a href="bar.html">bar</a></p>'
This class works around some browser limitations and cannot be used for arbitrary SGML/XML generation. For that purpose lxml and similar libraries exist.
Calling the builder escapes the string passed:
>>> html.p(html("<foo>"))
u'<p>&lt;foo&gt;</p>'
Replace special characters "&", "<", ">" and (") with HTML-safe sequences.
There is a special handling for None which escapes to an empty string.
Changed in version 0.9: quote is now implicitly on.
Parameters:
The reverse function of escape. This unescapes all the HTML entities, not only the XML entities inserted by escape.
Parameters: s – the string to unescape.
A decorator that converts a function into a lazy property. The function wrapped is called the first time to retrieve the result and then that calculated result is used the next time you access the value:
class Foo(object):
    @cached_property
    def foo(self):
        # calculate something important here
        return 42
The class has to have a __dict__ in order for this property to work.
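The caching trick above relies on the descriptor protocol. The following is a minimal sketch of how such a descriptor can work (Werkzeug's real cached_property handles a few extra edge cases not shown here):

```python
# Minimal sketch of a cached_property descriptor. Because it defines no
# __set__, it is a non-data descriptor: once the value is stored in the
# instance __dict__, ordinary attribute lookup finds it there first and
# the function is never called again.
class cached_property(object):
    def __init__(self, func):
        self.func = func
        self.__name__ = func.__name__

    def __get__(self, obj, type=None):
        if obj is None:
            return self
        value = obj.__dict__[self.__name__] = self.func(obj)
        return value

class Foo(object):
    calls = 0  # counts how often the expensive computation runs

    @cached_property
    def foo(self):
        Foo.calls += 1
        return 42
```

Accessing Foo().foo twice computes the value only once, which is exactly why the class needs a __dict__.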
Maps request attributes to environment variables. This works not only for the Werkzeug request object, but also any other class with an environ attribute:
>>> class Test(object):
... environ = {'key': 'value'}
... test = environ_property('key')
>>> var = Test()
>>> var.test
'value'
If you pass it a second value it’s used as default if the key does not exist, the third one can be a converter that takes a value and converts it. If it raises ValueError or TypeError the default value is used. If no default value is provided None is used.
Per default the property is read only. You have to explicitly enable it by passing read_only=False to the constructor.
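The default/converter behavior described above can be sketched as a plain descriptor. The name environ_property_sketch and the Example class are illustrative assumptions, not Werkzeug's source:

```python
# Sketch of an environ_property-style read-only descriptor: looks the
# key up in obj.environ, applies an optional converter, and falls back
# to the default on KeyError, ValueError, or TypeError.
class environ_property_sketch(object):
    def __init__(self, name, default=None, load_func=None):
        self.name = name
        self.default = default
        self.load_func = load_func

    def __get__(self, obj, type=None):
        if obj is None:
            return self
        try:
            rv = obj.environ[self.name]
        except KeyError:
            return self.default
        if self.load_func is not None:
            try:
                rv = self.load_func(rv)
            except (ValueError, TypeError):
                rv = self.default
        return rv

class Example(object):
    environ = {'key': 'value', 'number': '42'}
    test = environ_property_sketch('key')
    number = environ_property_sketch('number', -1, int)
    missing = environ_property_sketch('nope', 'fallback')
```

Example().number demonstrates the converter path: the raw string '42' is turned into an int before it is returned.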
Like environ_property but for headers.
Parse a cookie. Either from a string or WSGI environ.
Per default encoding errors are ignored. If you want a different behavior you can set errors to 'replace' or 'strict'. In strict mode a HTTPUnicodeError is raised.
Changed in version 0.5: This function now returns a TypeConversionDict instead of a regular dict. The cls parameter was added.
Parameters:
Creates a new Set-Cookie header without the Set-Cookie prefix. The parameters are the same as in the cookie Morsel object in the Python standard library, but it accepts unicode data, too.
On Python 3 the return value of this function will be a unicode string, on Python 2 it will be a native string. In both cases the return value is usually restricted to ascii as the vast majority of values are properly escaped, but that is no guarantee. If a unicode string is returned it’s tunneled through latin1 as required by PEP 3333.
The return value is not ASCII safe if the key contains unicode characters. This is technically against the specification but happens in the wild. It’s strongly recommended to not use non-ASCII values for the keys.
Parameters:
Return a response object (a WSGI application) that, if called, redirects the client to the target location. Supported codes are 301, 302, 303, 305, and 307. 300 is not supported because it's not a real redirect and 304 because it's the answer for a request with defined If-Modified-Since headers.
New in version 0.6: The location can now be a unicode string that is encoded using the iri_to_uri() function.
Parameters:
Redirect to the same URL but with a slash appended. The behavior of this function is undefined if the path ends with a slash already.
Parameters:
Imports an object based on a string. This is useful if you want to use import paths as endpoints or something similar. An import path can be specified either in dotted notation (xml.sax.saxutils.escape) or with a colon as object delimiter (xml.sax.saxutils:escape).
If silent is True the return value will be None if the import fails.
Parameters:
Returns: imported object
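Both path notations can be resolved with the standard importlib machinery. This sketch (the name import_string_sketch is an assumption) shows the core idea; Werkzeug's import_string additionally supports the silent flag and better error reporting:

```python
import importlib

def import_string_sketch(import_name):
    """Import an object from a 'module.attr' or 'module:attr' path."""
    if ':' in import_name:
        # colon notation: everything before the colon is the module
        module_name, obj_name = import_name.split(':', 1)
    elif '.' in import_name:
        # dotted notation: the last component is assumed to be the attribute
        module_name, obj_name = import_name.rsplit('.', 1)
    else:
        return importlib.import_module(import_name)
    module = importlib.import_module(module_name)
    return getattr(module, obj_name)
```

Note that with dotted notation the split is ambiguous (the last component could be a submodule), which is one reason the colon form exists.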
Find all the modules below a package. This can be useful to automatically import all views / controllers so that their metaclasses / function decorators have a chance to register themselves on the application.
Packages are not returned unless include_packages is True. This can also recursively list modules but in that case it will import all the packages to get the correct load path of that module.
Parameters:
Returns: generator
Check if the function accepts the arguments and keyword arguments. Returns a new (args, kwargs) tuple that can safely be passed to the function without causing a TypeError because the function signature is incompatible. If drop_extra is set to True (which is the default) any extra positional or keyword arguments are dropped automatically.
The exception raised provides three attributes:
This can be useful for decorators that forward user submitted data to a view function:
from werkzeug.utils import ArgumentValidationError, validate_arguments
from werkzeug.exceptions import BadRequest

def sanitize(f):
    def proxy(request):
        data = request.values.to_dict()
        try:
            args, kwargs = validate_arguments(f, (request,), data)
        except ArgumentValidationError:
            raise BadRequest('The browser failed to transmit all '
                             'the data expected.')
        return f(*args, **kwargs)
    return proxy
Parameters:
Returns: tuple in the form (args, kwargs).
Pass it a filename and it will return a secure version of it. This filename can then safely be stored on a regular file system and passed to os.path.join(). The filename returned is an ASCII only string for maximum portability.
On Windows systems the function also makes sure that the file is not named after one of the special device files.
>>> secure_filename("My cool movie.mov")
'My_cool_movie.mov'
>>> secure_filename("../../../etc/passwd")
'etc_passwd'
>>> secure_filename(u'i contain cool \xfcml\xe4uts.txt')
'i_contain_cool_umlauts.txt'
The function might return an empty filename. It's your responsibility to ensure that the filename is unique and that you generate a random filename if the function returned an empty one.
New in version 0.5.
Parameters: filename – the filename to secure
Bind the arguments provided into a dict. When passed a function, a tuple of arguments and a dict of keyword arguments bind_arguments returns a dict of names as the function would see it. This can be useful to implement a cache decorator that uses the function arguments to build the cache key based on the values of the arguments.
Parameters:
Returns: a dict of bound keyword arguments.
Implements a callable that constructs URLs with the given base. The function can be called with any number of positional and keyword arguments which then are used to assemble the URL. Works with URLs and posix paths.
Positional arguments are appended as individual segments to the path of the URL:
>>> href = Href('/foo')
>>> href('bar', 23)
'/foo/bar/23'
>>> href('foo', bar=23)
'/foo/foo?bar=23'
If any of the arguments (positional or keyword) evaluates to None it will be skipped. If no keyword arguments are given the last argument can be a dict or MultiDict (or any other dict subclass), otherwise the keyword arguments are used for the query parameters, cutting off the first trailing underscore of the parameter name:
>>> href(is_=42)
'/foo?is=42'
>>> href({'foo': 'bar'})
'/foo?foo=bar'
Combining of both methods is not allowed:
>>> href({'foo': 'bar'}, bar=42)
Traceback (most recent call last):
...
TypeError: keyword arguments and query-dicts can't be combined
Accessing attributes on the href object creates a new href object with the attribute name as prefix:
>>> bar_href = href.bar
>>> bar_href("blub")
'/foo/bar/blub'
If sort is set to True the items are sorted by key or the default sorting algorithm:
>>> href = Href("/", sort=True)
>>> href(a=1, b=2, c=3)
'/?a=1&b=2&c=3'
New in version 0.5: sort and key were added.
Parse a querystring and return it as MultiDict. There is a difference in key decoding on different Python versions. On Python 3 keys will always be fully decoded whereas on Python 2, keys will remain bytestrings if they fit into ASCII. On 2.x keys can be forced to be unicode by setting decode_keys to True.
If the charset is set to None no unicode decoding will happen and raw bytes will be returned.
Per default a missing value for a key will default to an empty string. If you don't want that behavior you can set include_empty to False.
Per default encoding errors are ignored. If you want a different behavior you can set errors to 'replace' or 'strict'. In strict mode a HTTPUnicodeError is raised.
Changed in version 0.5: In previous versions ";" and "&" could be used for url decoding. This changed in 0.5, where only "&" is supported. If you want to use ";" instead a different separator can be provided.
The cls parameter was added.
Parameters:
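Conceptually, url_decode produces a multi-valued mapping from the query string. A rough stdlib equivalent (Python 3's urllib.parse; the dict-of-lists stands in for Werkzeug's MultiDict, and the function name is an assumption) looks like this:

```python
from urllib.parse import parse_qsl

def url_decode_sketch(qs):
    """Parse a query string into a dict mapping each key to a list of values."""
    result = {}
    # keep_blank_values=True mirrors the default of keeping empty values
    for key, value in parse_qsl(qs, keep_blank_values=True):
        result.setdefault(key, []).append(value)
    return result
```

Repeated keys such as a=1&a=2 end up as ['1', '2'], which is the behavior the MultiDict exposes via getlist.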
Works like url_decode() but decodes a stream. The behavior of stream and limit follows functions like make_line_iter(). The generator of pairs is directly fed to the cls so you can consume the data while it’s parsed.
New in version 0.8.
Parameters:
URL encode a dict/MultiDict. If a value is None it will not appear in the result string. Per default only values are encoded into the target charset strings. If encode_keys is set to True unicode keys are supported too.
If sort is set to True the items are sorted by key or the default sorting algorithm.
New in version 0.5: sort, key, and separator were added.
Parameters:
Like url_encode() but writes the results to a stream object. If the stream is None a generator over all encoded pairs is returned.
New in version 0.8.
Parameters:
URL encode a single string with a given encoding.
Parameters:
New in version 0.9.2: The unsafe parameter was added.
URL encode a single string with the given encoding and convert whitespace to “+”.
Parameters:
URL decode a single string with a given encoding. If the charset is set to None no unicode decoding is performed and raw bytes are returned.
Parameters:
URL decode a single string with the given charset and decode “+” to whitespace.
Per default encoding errors are ignored. If you want a different behavior you can set errors to 'replace' or 'strict'. In strict mode a HTTPUnicodeError is raised.
Parameters:
Sometimes you get a URL from a user that just isn't a real URL because it contains unsafe characters like ' ' and so on. This function can fix some of the problems in a similar way browsers handle data entered by the user:
>>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)')
'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)'
Parameters:
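The idea behind url_fix can be approximated with the standard library: split the URL, percent-quote the unsafe characters in each part, and reassemble. This is only a sketch under assumptions (the safe-character sets are chosen to reproduce the example above; Werkzeug's own rules differ in detail):

```python
from urllib.parse import urlsplit, urlunsplit, quote, quote_plus

def url_fix_sketch(url):
    """Percent-quote unsafe characters in the path and query of a URL."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    # keep '/', existing '%' escapes and parentheses; quote spaces,
    # non-ASCII characters (as UTF-8) and other unsafe characters
    path = quote(path, safe='/%()')
    query = quote_plus(query, safe=':&%=')
    return urlunsplit((scheme, netloc, path, query, fragment))
```

Applied to the example above it yields the same percent-encoded result the docs show.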
Converts a URI in a given charset to an IRI.
Examples for URI versus IRI:
>>> uri_to_iri(b'http://xn--n3h.net/')
u'http://\u2603.net/'
>>> uri_to_iri(b'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th')
u'http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th'
Query strings are left unchanged:
>>> uri_to_iri('/?foo=24&x=%26%2f')
u'/?foo=24&x=%26%2f'
New in version 0.6.
Parameters:
Converts any unicode based IRI to an acceptable ASCII URI. Werkzeug always uses utf-8 URLs internally because this is what browsers and HTTP do as well. In some places where it accepts a URL it also accepts a unicode IRI and converts it into a URI.
Examples for IRI versus URI:
>>> iri_to_uri(u'http://☃.net/')
'http://xn--n3h.net/'
>>> iri_to_uri(u'http://üser:pässword@☃.net/påth')
'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th'
New in version 0.6.
Parameters:
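The two transformations involved, IDNA-encoding the hostname and UTF-8 percent-encoding the rest, can be sketched with stdlib tools. This illustration (the function name is an assumption) only handles a bare hostname; Werkzeug also deals with userinfo, ports, and query strings:

```python
from urllib.parse import urlsplit, urlunsplit, quote

def iri_to_uri_sketch(iri):
    """Convert a unicode IRI to an ASCII URI (hostname-only sketch)."""
    scheme, netloc, path, query, fragment = urlsplit(iri)
    # internationalized domain names become ASCII via IDNA/punycode
    netloc = netloc.encode('idna').decode('ascii')
    # non-ASCII path characters become UTF-8 percent escapes
    path = quote(path.encode('utf-8'), safe='/%')
    return urlunsplit((scheme, netloc, path, query, fragment))
```

For example the host bücher.de becomes xn--bcher-kva.de while the path is percent-encoded.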
Represents a user agent. Pass it a WSGI environment or a user agent string and you can inspect some of the details from the user agent string via the attributes. The following attributes exist:
the raw user agent string
the browser platform. The following platforms are currently recognized:
the name of the browser. The following browsers are currently recognized:
(Browsers marked with a star (*) are crawlers.)
the version of the browser
the language of the browser
New in version 0.6.1.
Hash a password with the given method and salt with a string of the given length. The format of the string returned includes the method that was used so that check_password_hash() can check the hash.
The format for the hashed string looks like this:
method$salt$hash
This method cannot generate unsalted passwords but it is possible to set the method to plain to enforce plaintext passwords. If a salt is used, hmac is used internally to salt the password.
If PBKDF2 is wanted it can be enabled by setting the method to pbkdf2:method:iterations where iterations is optional:
pbkdf2:sha1:2000$salt$hash
pbkdf2:sha1$salt$hash
Parameters:
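The method$salt$hash format can be illustrated with the standard library's hmac module. The helper names and the exact salt handling below are assumptions for the sketch, not Werkzeug's actual generate_password_hash/check_password_hash implementation:

```python
import hashlib
import hmac
import os

def hash_password_sketch(password, method='sha1', salt_length=8):
    """Produce a 'method$salt$hash' string; the salt is mixed in via HMAC."""
    salt = os.urandom(salt_length).hex()
    digest = hmac.new(salt.encode(), password.encode(),
                      getattr(hashlib, method)).hexdigest()
    return '%s$%s$%s' % (method, salt, digest)

def check_password_sketch(stored, password):
    """Re-derive the digest from the stored method and salt and compare."""
    method, salt, digest = stored.split('$', 2)
    expected = hmac.new(salt.encode(), password.encode(),
                        getattr(hashlib, method)).hexdigest()
    return hmac.compare_digest(expected, digest)
```

Because the method and salt travel inside the stored string, the checker needs no external state.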
Check a password against a given salted and hashed password value. In order to support unsalted legacy passwords this method supports plain text passwords, md5 and sha1 hashes (both salted and unsalted).
Returns True if the password matched, False otherwise.
Parameters:
This function compares strings in somewhat constant time. This requires that the length of at least one string is known in advance.
Returns True if the two strings are equal or False if they are not.
New in version 0.7.
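The "somewhat constant time" idea is to accumulate differences instead of returning at the first mismatch, so the runtime does not leak where the strings diverge. A pure-Python sketch (not Werkzeug's safe_str_cmp itself):

```python
def constant_time_eq(a, b):
    """Compare two strings without short-circuiting on the first mismatch."""
    if len(a) != len(b):
        return False
    result = 0
    for x, y in zip(a, b):
        # OR the XOR of each character pair; any difference leaves a bit set
        result |= ord(x) ^ ord(y)
    return result == 0
```

On modern Pythons the standard library's hmac.compare_digest provides the same guarantee and should generally be preferred.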
Safely join directory and filename. If this cannot be done, this function returns None.
Parameters:
Like pbkdf2_bin() but returns a hex encoded string.
New in version 0.9.
Parameters:
Returns a binary digest for the PBKDF2 hash algorithm of data with the given salt. It iterates iterations time and produces a key of keylen bytes. By default SHA-1 is used as hash function, a different hashlib hashfunc can be provided.
New in version 0.9.
Parameters:
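The same kind of derivation is available in the standard library as hashlib.pbkdf2_hmac. A hex-returning wrapper along the lines described above (the function name is an assumption, not Werkzeug's pbkdf2_hex):

```python
import binascii
import hashlib

def pbkdf2_hex_sketch(data, salt, iterations=1000, keylen=24, hashfunc='sha1'):
    """Derive a hex-encoded PBKDF2 key of keylen bytes using HMAC-hashfunc."""
    derived = hashlib.pbkdf2_hmac(hashfunc, data.encode('utf-8'),
                                  salt.encode('utf-8'), iterations, keylen)
    return binascii.hexlify(derived).decode('ascii')
```

The derivation is deterministic for a given data/salt pair, and changing the salt changes the whole key.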
Sooner or later you have some things you want to have in every single view or helper function or whatever. In PHP the way to go is global variables. However, that isn't possible in WSGI applications without a major drawback: as soon as you operate on the global namespace your application isn't thread-safe any longer.
The Python standard library comes with a utility called “thread locals”. A thread local is a global object in which you can put stuff in and get back later in a thread-safe way. That means whenever you set or get an object on a thread local object, the thread local object checks in which thread you are and retrieves the correct value.
This, however, has a few disadvantages. For example, besides threads there are other ways to handle concurrency in Python. A very popular approach is greenlets. Also, whether every request gets its own thread is not guaranteed in WSGI. It could be that a request is reusing a thread from before, and hence data is left in the thread local object.
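The standard-library thread locals mentioned above behave like this; each thread sees its own value for the same attribute, and a thread that never set the attribute sees nothing:

```python
import threading

# every thread gets its own view of attributes on this object
local = threading.local()

def worker(value, results):
    local.request = value          # visible only inside this thread
    results.append(local.request)

results = []
threads = [threading.Thread(target=worker, args=(i, results))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# the main thread never set local.request, so it has no such attribute
```

This is exactly the isolation werkzeug.local builds on, extended to also work per-greenlet and with explicit cleanup.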
Here’s a simple example of how one could use werkzeug.local:
from werkzeug.local import Local, LocalManager

local = Local()
local_manager = LocalManager([local])

def application(environ, start_response):
    local.request = request = Request(environ)
    ...

application = local_manager.make_middleware(application)
This binds the request to local.request. Every other piece of code executed after this assignment in the same context can safely access local.request and will get the same request object. The make_middleware method on the local manager ensures that all references to the local objects are cleared up after the request.
The same context means the same greenlet (if you’re using greenlets) in the same thread and same process.
If a request object is not yet set on the local object and you try to access it, you will get an AttributeError. You can use getattr to avoid that:
def get_request():
    return getattr(local, 'request', None)
This will try to get the request or return None if the request is not (yet?) available.
Note that local objects cannot manage themselves, for that you need a local manager. You can pass a local manager multiple locals or add additional locals later by appending them to manager.locals, and every time the manager cleans up it will clean up all the data left in the locals for this context.
Releases the contents of the local for the current context. This makes it possible to use locals without a manager.
Example:
>>> loc = Local()
>>> loc.foo = 42
>>> release_local(loc)
>>> hasattr(loc, 'foo')
False
With this function one can release Local objects as well as LocalStack objects. However it is not possible to release data held by proxies that way, one always has to retain a reference to the underlying local object in order to be able to release it.
New in version 0.6.1.
Local objects cannot manage themselves. For that you need a local manager. You can pass a local manager multiple locals or add them later by appending them to manager.locals. Every time the manager cleans up, it will clean up all the data left in the locals for this context.
The ident_func parameter can be added to override the default ident function for the wrapped locals.
Changed in version 0.6.1: Instead of a manager the release_local() function can be used as well.
Changed in version 0.7: ident_func was added.
Manually clean up the data in the locals for this context. Call this at the end of the request or use make_middleware().
Return the context identifier the local objects use internally for this context. You cannot override this method to change the behavior but use it to link other context local objects (such as SQLAlchemy’s scoped sessions) to the Werkzeug locals.
Changed in version 0.7: You can pass a different ident function to the local manager that will then be propagated to all the locals passed to the constructor.
Wrap a WSGI application so that cleaning up happens after request end.
Like make_middleware but for decorating functions.
Example usage:
@manager.middleware
def application(environ, start_response):
    ...
The difference to make_middleware is that the function passed will have all the arguments copied from the inner application (name, docstring, module).
This class works similar to a Local but keeps a stack of objects instead. This is best explained with an example:
>>> ls = LocalStack()
>>> ls.push(42)
>>> ls.top
42
>>> ls.push(23)
>>> ls.top
23
>>> ls.pop()
23
>>> ls.top
42
They can be force released by using a LocalManager or with the release_local() function but the correct way is to pop the item from the stack after using. When the stack is empty it will no longer be bound to the current context (and as such released).
By calling the stack without arguments it returns a proxy that resolves to the topmost item on the stack.
New in version 0.6.1.
Removes the topmost item from the stack and returns the old value, or None if the stack was already empty.
Pushes a new item to the stack.
The topmost item on the stack. If the stack is empty, None is returned.
Acts as a proxy for a werkzeug local. Forwards all operations to a proxied object. The only operations not supported for forwarding are right handed operands and any kind of assignment.
Example usage:
from werkzeug.local import Local
l = Local()
# these are proxies
request = l('request')
user = l('user')
from werkzeug.local import LocalStack
_response_local = LocalStack()
# this is a proxy
response = _response_local()
Whenever something is bound to l.user / l.request the proxy objects will forward all operations. If no object is bound a RuntimeError will be raised.
To create proxies to Local or LocalStack objects, call the object as shown above. If you want to have a proxy to an object looked up by a function, you can (as of Werkzeug 0.6.1) pass a function to the LocalProxy constructor:
session = LocalProxy(lambda: get_current_request().session)
Changed in version 0.6.1: The class can be instantiated with a callable as well now.
Keep in mind that repr() is also forwarded, so if you want to find out if you are dealing with a proxy you can do an isinstance() check:
>>> from werkzeug.local import LocalProxy
>>> isinstance(request, LocalProxy)
True
You can also create proxy objects by hand:
from werkzeug.local import Local, LocalProxy
local = Local()
request = LocalProxy(local, 'request')
Return the current object. This is useful if you want the real object behind the proxy at a time for performance reasons or because you want to pass the object into a different context.
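The forwarding idea can be sketched with a small class that resolves its target through a lookup callable on every access. MiniProxy is illustrative only; Werkzeug's LocalProxy forwards many more special methods:

```python
class MiniProxy:
    """Illustrative proxy (cf. werkzeug.local.LocalProxy): attribute access
    is forwarded to whatever the lookup callable currently returns."""

    def __init__(self, lookup):
        self._lookup = lookup

    def _get_current_object(self):
        # resolve the proxied object anew on every access
        return self._lookup()

    def __getattr__(self, name):
        # only called for attributes not found on the proxy itself
        return getattr(self._get_current_object(), name)

    def __repr__(self):
        return repr(self._get_current_object())

state = {'obj': 'hello'}
current = MiniProxy(lambda: state['obj'])
assert current.upper() == 'HELLO'
state['obj'] = 'world'        # rebinding changes what the proxy resolves to
assert current.upper() == 'WORLD'
```

Because resolution happens per access, the same proxy object can stand for a different target in each request or thread.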
Middlewares wrap applications to dispatch between them or provide additional request handling. In addition to the middlewares documented here, there is also the DebuggedApplication class that is implemented as a WSGI middleware.
A WSGI middleware that provides static content for development environments or simple server setups. Usage is quite simple:
import os
from werkzeug.wsgi import SharedDataMiddleware
app = SharedDataMiddleware(app, {
    '/shared': os.path.join(os.path.dirname(__file__), 'shared')
})
The contents of the folder ./shared will now be available on http://example.com/shared/. This is pretty useful during development because a standalone media server is not required. One can also mount files on the root folder and still continue to use the application because the shared data middleware forwards all unhandled requests to the application, even if the requests are below one of the shared folders.
If pkg_resources is available you can also tell the middleware to serve files from package data:
app = SharedDataMiddleware(app, {
    '/shared': ('myapplication', 'shared_files')
})
This will then serve the shared_files folder in the myapplication Python package.
The optional disallow parameter can be a list of fnmatch() rules for files that are not accessible from the web. If cache is set to False no caching headers are sent.
Currently the middleware does not support non-ASCII filenames. If the encoding on the file system happens to be the encoding of the URI it may work but this could also be by accident. We strongly suggest using ASCII-only file names for static files.
The middleware will guess the mimetype using the Python mimetypes module. If it's unable to figure out the charset it will fall back to fallback_mimetype.
Changed in version 0.5: The cache timeout is configurable now.
New in version 0.6: The fallback_mimetype parameter was added.
Parameters: cache_timeout – the cache timeout in seconds for the headers.
Subclasses can override this method to disallow the access to certain files. However by providing disallow in the constructor this method is overwritten.
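As a sketch of how such fnmatch() rules behave, the hypothetical is_allowed helper below rejects any path matching a disallow pattern; the real middleware's method differs in detail:

```python
from fnmatch import fnmatch

def is_allowed(path, disallow=('*.py', '*.cfg')):
    """Illustrative access check: reject paths matching any disallow
    pattern (the default patterns here are made up for the example)."""
    return not any(fnmatch(path, pattern) for pattern in disallow)

# static assets pass, source/config files are blocked
assert is_allowed('/shared/logo.png')
assert not is_allowed('/shared/settings.cfg')
```

Note that fnmatch's `*` also matches slashes, so a pattern like `*.py` blocks the file at any depth.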
Allows one to mount middlewares or applications in a WSGI application. This is useful if you want to combine multiple WSGI applications:
app = DispatcherMiddleware(app, {
    '/app2': app2,
    '/app3': app3
})
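The dispatching idea can be sketched as follows. The illustrative make_dispatcher below moves the matched prefix from PATH_INFO into SCRIPT_NAME, which is roughly what DispatcherMiddleware does (the real class handles more edge cases):

```python
def make_dispatcher(default_app, mounts):
    """Illustrative prefix dispatcher: route by PATH_INFO prefix.
    In a real implementation the longest matching prefix should win."""
    def dispatcher(environ, start_response):
        path = environ.get('PATH_INFO', '')
        for prefix, app in mounts.items():
            if path == prefix or path.startswith(prefix + '/'):
                # shift the prefix from PATH_INFO into SCRIPT_NAME
                environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + prefix
                environ['PATH_INFO'] = path[len(prefix):]
                return app(environ, start_response)
        return default_app(environ, start_response)
    return dispatcher

def app_factory(name):
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [name.encode()]
    return app

app = make_dispatcher(app_factory('main'), {'/app2': app_factory('app2')})
body = app({'PATH_INFO': '/app2/x'}, lambda status, headers: None)
assert body == [b'app2']
```

Adjusting SCRIPT_NAME/PATH_INFO is what lets the mounted application build correct URLs relative to its mount point.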
Also there’s the …
Like the name says. But who knows how it works?
This module implements a number of Python exceptions you can raise from within your views to trigger a standard non-200 response.
from werkzeug.wrappers import BaseRequest
from werkzeug.wsgi import responder
from werkzeug.exceptions import HTTPException, NotFound
def view(request):
    raise NotFound()

@responder
def application(environ, start_response):
    request = BaseRequest(environ)
    try:
        return view(request)
    except HTTPException as e:
        return e
As you can see from this example those exceptions are callable WSGI applications. Because of Python 2.4 compatibility those do not extend from the response objects but only from the python exception class.
As a matter of fact they are not Werkzeug response objects. However you can get a response object by calling get_response() on a HTTP exception.
Keep in mind that you have to pass an environment to get_response() because some errors fetch additional information from the WSGI environment.
If you want to hook in a different exception page to say, a 404 status code, you can add a second except for a specific subclass of an error:
@responder
def application(environ, start_response):
    request = BaseRequest(environ)
    try:
        return view(request)
    except NotFound as e:
        return not_found(request)
    except HTTPException as e:
        return e
The following error classes exist in Werkzeug:
400 Bad Request
Raise if the browser sends something to the application the application or server cannot handle.
401 Unauthorized
Raise if the user is not authorized. Also used if you want to use HTTP basic auth.
403 Forbidden
Raise if the user doesn’t have the permission for the requested resource but was authenticated.
404 Not Found
Raise if a resource does not exist and never existed.
405 Method Not Allowed
Raise if the server used a method the resource does not handle. For example POST if the resource is view only. Especially useful for REST.
The first argument for this exception should be a list of allowed methods. Strictly speaking the response would be invalid if you don’t provide valid methods in the header which you can do with that list.
406 Not Acceptable
Raise if the server can’t return any content conforming to the Accept headers of the client.
408 Request Timeout
Raise to signal a timeout.
409 Conflict
Raise to signal that a request cannot be completed because it conflicts with the current state on the server.
New in version 0.7.
410 Gone
Raise if a resource existed previously and went away without new location.
411 Length Required
Raise if the browser submitted data but no Content-Length header which is required for the kind of processing the server does.
412 Precondition Failed
Status code used in combination with If-Match, If-None-Match, or If-Unmodified-Since.
413 Request Entity Too Large
The status code one should return if the data submitted exceeded a given limit.
414 Request URI Too Large
Like 413 but for too long URLs.
415 Unsupported Media Type
The status code returned if the server is unable to handle the media type the client transmitted.
416 Requested Range Not Satisfiable
The client asked for a part of the file that lies beyond the end of the file.
New in version 0.7.
417 Expectation Failed
The server cannot meet the requirements of the Expect request-header.
New in version 0.7.
418 I’m a teapot
The server should return this if it is a teapot and someone attempted to brew coffee with it.
New in version 0.7.
428 Precondition Required
The server requires this request to be conditional, typically to prevent the lost update problem, which is a race condition between two or more clients attempting to update a resource through PUT or DELETE. By requiring each client to include a conditional header (“If-Match” or “If-Unmodified-Since”) with the proper value retained from a recent GET request, the server ensures that each client has at least seen the previous revision of the resource.
429 Too Many Requests
The server is limiting the rate at which this user receives responses, and this request exceeds that rate. (The server may use any convenient method to identify users and their request rates). The server may include a “Retry-After” header to indicate how long the user should wait before retrying.
431 Request Header Fields Too Large
The server refuses to process the request because the header fields are too large. One or more individual fields may be too large, or the set of all headers is too large.
500 Internal Server Error
Raise if an internal server error occurred. This is a good fallback if an unknown error occurred in the dispatcher.
501 Not Implemented
Raise if the application does not support the action requested by the browser.
502 Bad Gateway
If you do proxying in your application you should return this status code if you received an invalid response from the upstream server it accessed in attempting to fulfill the request.
503 Service Unavailable
Status code you should return if a service is temporarily unavailable.
This exception is used to signal unicode decode errors of request data. For more information see the Unicode chapter.
Internal exception that is raised if Werkzeug detects a disconnected client. Since the client is already gone at that point attempting to send the error message to the client might not work and might ultimately result in another exception in the server. Mainly this is here so that it is silenced by default as far as Werkzeug is concerned.
Since disconnections cannot be reliably detected and are unspecified by WSGI to a large extent, this might or might not be raised if a client is gone.
New in version 0.8.
Raised if something triggers a security error. This is otherwise exactly like a bad request error.
New in version 0.9.
All the exceptions implement this common interface:
Baseclass for all HTTP exceptions. This exception can be called as WSGI application to render a default error page or you can catch the subclasses of it independently and render nicer error messages.
Call the exception as WSGI application.
Get a response object. If one was passed to the exception it’s returned directly.
Parameters: environ – the optional environ for the request. This can be used to modify the response depending on what the request looked like.
Returns: a Response object or a subclass thereof.
Starting with Werkzeug 0.3 some of the builtin classes raise exceptions that look like regular Python exceptions (e.g. KeyError) but are BadRequest HTTP exceptions at the same time. This decision was made to simplify a common pattern where you want to abort if the client tampered with the submitted form data in a way that the application can’t recover properly and should abort with 400 BAD REQUEST.
Assuming the application catches all HTTP exceptions and reacts to them properly, a view function could safely do the following and doesn’t have to check if the keys exist:
def new_post(request):
    post = Post(title=request.form['title'], body=request.form['body'])
    post.save()
    return redirect(post.url)
If title or body are missing in the form, a special key error will be raised which behaves like a KeyError but is also a BadRequest exception.
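The dual-natured exception can be sketched in plain Python. BadRequest, BadRequestKeyError and FormDict below are illustrative stand-ins, not Werkzeug's classes:

```python
class BadRequest(Exception):
    """Illustrative 400 error."""
    code = 400

class BadRequestKeyError(BadRequest, KeyError):
    """Behaves like a KeyError *and* like a bad-request error."""

class FormDict(dict):
    """Illustrative form container: missing keys abort with 400."""
    def __missing__(self, key):
        raise BadRequestKeyError(key)

form = FormDict(title='Hello')
assert form['title'] == 'Hello'
try:
    form['body']
except KeyError as e:                 # caught as a plain KeyError ...
    assert isinstance(e, BadRequest)  # ... but it is also a 400 error
    assert e.code == 400
```

A generic HTTP-exception handler catches it as BadRequest, while code that treats the form as a dict still sees a normal KeyError.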
Sometimes it’s convenient to just raise an exception by the error code, without importing the exception and looking up the name etc. For this purpose there is the abort() function.
It can be passed a WSGI application or a status code. If a status code is given it’s looked up in the list of exceptions from above and will raise that exception, if passed a WSGI application it will wrap it in a proxy WSGI exception and raise that:
abort(404)
abort(Response('Hello World'))
If you want to use this functionality with custom exceptions you can create an instance of the aborter class:
When passed a dict of code -> exception items it can be used as callable that raises exceptions. If the first argument to the callable is an integer it will be looked up in the mapping, if it’s a WSGI application it will be raised in a proxy exception.
The rest of the arguments are forwarded to the exception constructor.
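The code-to-exception dispatch can be sketched as below; this illustrative Aborter covers only integer codes (the real one also accepts WSGI applications and ships with a default mapping of all errors):

```python
class HTTPException(Exception):
    """Illustrative base error."""
    code = None

class NotFound(HTTPException):
    code = 404

class Aborter:
    """Illustrative code -> exception dispatcher."""
    def __init__(self, mapping):
        self.mapping = mapping

    def __call__(self, code, *args, **kwargs):
        if code not in self.mapping:
            raise LookupError('no exception for %r' % code)
        # extra arguments are forwarded to the exception constructor
        raise self.mapping[code](*args, **kwargs)

abort = Aborter({404: NotFound})
try:
    abort(404)
except NotFound as exc:
    assert exc.code == 404
```

Callers never import the concrete exception class; they just pass the status code.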
As you can see from the list above, not all status codes are available as errors. In particular, redirects and other non-200 status codes that do not represent errors are missing. For redirects you can use the redirect() function from the utilities.
If you want to add an error yourself you can subclass HTTPException:
from werkzeug.exceptions import HTTPException
class PaymentRequired(HTTPException):
    code = 402
    description = '<p>Payment required.</p>'
This is the minimal code you need for your own exception. If you want to add more logic to the errors you can override the get_description(), get_body(), get_headers() and get_response() methods. In any case you should have a look at the sourcecode of the exceptions module.
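To illustrate why this subclassing works, here is a toy base class that, like HTTPException, is itself a callable WSGI application rendering its code and description; the names and exact body format are illustrative, not Werkzeug's:

```python
class MiniHTTPException(Exception):
    """Illustrative: an HTTP error that is also a WSGI application."""
    code = 500
    name = 'Internal Server Error'
    description = '<p>Internal server error.</p>'

    def get_body(self, environ=None):
        # subclasses could override this for nicer error pages
        return '<h1>%s %s</h1>\n%s' % (self.code, self.name, self.description)

    def __call__(self, environ, start_response):
        start_response('%s %s' % (self.code, self.name),
                       [('Content-Type', 'text/html; charset=utf-8')])
        return [self.get_body(environ).encode('utf-8')]

class PaymentRequired(MiniHTTPException):
    code = 402
    name = 'Payment Required'
    description = '<p>Payment required.</p>'

seen = []
body = PaymentRequired()({}, lambda status, headers: seen.append(status))
assert seen == ['402 Payment Required']
```

Declaring code and description is enough because the base class does all the WSGI plumbing.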
You can override the default description in the constructor with the description parameter (it’s the first argument for all exceptions except for MethodNotAllowed, which accepts a list of allowed methods as first argument):
raise BadRequest('Request failed because X was not present')
This section covers running your application in production on a web server such as Apache or lighttpd.
If all other deployment methods do not work, CGI will work for sure. CGI is supported by all major servers but usually has a less-than-optimal performance.
This is also the way you can use a Werkzeug application on Google’s AppEngine, there however the execution does happen in a CGI-like environment. The application’s performance is unaffected because of that.
First you need to create the CGI application file. Let’s call it yourapplication.cgi:
#!/usr/bin/python
from wsgiref.handlers import CGIHandler
from yourapplication import make_app
application = make_app()
CGIHandler().run(application)
If you’re running Python 2.4 you will need the wsgiref package. Python 2.5 and higher ship this as part of the standard library.
Usually there are two ways to configure the server. Either just copy the .cgi into a cgi-bin (and use mod_rewrite or something similar to rewrite the URL) or let the server point to the file directly.
In Apache for example you can put a line like this into the config:
ScriptAlias /app /path/to/the/application.cgi
For more information consult the documentation of your webserver.
If you are using the Apache webserver you should consider using mod_wsgi.
If you don’t have mod_wsgi installed yet you have to either install it using a package manager or compile it yourself.
The mod_wsgi installation instructions cover installation instructions for source installations on UNIX systems.
If you are using Ubuntu or Debian you can apt-get it and activate it as follows:
# apt-get install libapache2-mod-wsgi
On FreeBSD install mod_wsgi by compiling the www/mod_wsgi port or by using pkg_add:
# pkg_add -r mod_wsgi
If you are using pkgsrc you can install mod_wsgi by compiling the www/ap2-wsgi package.
If you encounter segfaulting child processes after the first apache reload you can safely ignore them. Just restart the server.
To run your application you need a yourapplication.wsgi file. This file contains the code mod_wsgi is executing on startup to get the application object. The object called application in that file is then used as application.
For most applications the following file should be sufficient:
from yourapplication import make_app
application = make_app()
If you don’t have a factory function for application creation but a singleton instance you can directly import that one as application.
Store that file somewhere where you will find it again (eg: /var/www/yourapplication) and make sure that yourapplication and all the libraries that are in use are on the python load path. If you don’t want to install it system wide consider using a virtual python instance.
The last thing you have to do is to create an Apache configuration file for your application. In this example we are telling mod_wsgi to execute the application under a different user for security reasons:
<VirtualHost *>
    ServerName example.com

    WSGIDaemonProcess yourapplication user=user1 group=group1 processes=2 threads=5
    WSGIScriptAlias / /var/www/yourapplication/yourapplication.wsgi

    <Directory /var/www/yourapplication>
        WSGIProcessGroup yourapplication
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>
For more information consult the mod_wsgi wiki.
A very popular deployment setup on servers like lighttpd and nginx is FastCGI. To use your WSGI application with any of them you will need a FastCGI server first.
The most popular one is flup which we will use for this guide. Make sure to have it installed.
First you need to create the FastCGI server file. Let’s call it yourapplication.fcgi:
#!/usr/bin/python
from flup.server.fcgi import WSGIServer
from yourapplication import make_app
if __name__ == '__main__':
    application = make_app()
    WSGIServer(application).run()
This is enough for Apache to work, however nginx and older versions of lighttpd need a socket to be explicitly passed to communicate with the FastCGI server. For that to work you need to pass the path to the socket to the WSGIServer:
WSGIServer(application, bindAddress='/path/to/fcgi.sock').run()
The path has to be the exact same path you define in the server config.
Save the yourapplication.fcgi file somewhere you will find it again. It makes sense to have that in /var/www/yourapplication or something similar.
Make sure to set the executable bit on that file so that the servers can execute it:
# chmod +x /var/www/yourapplication/yourapplication.fcgi
A basic FastCGI configuration for lighttpd looks like this:
fastcgi.server = ("/yourapplication.fcgi" =>
    ((
        "socket" => "/tmp/yourapplication-fcgi.sock",
        "bin-path" => "/var/www/yourapplication/yourapplication.fcgi",
        "check-local" => "disable",
        "max-procs" => 1
    ))
)
alias.url = (
    "/static/" => "/path/to/your/static"
)

url.rewrite-once = (
    "^(/static.*)$" => "$1",
    "^(/.*)$" => "/yourapplication.fcgi$1"
)
Remember to enable the FastCGI, alias and rewrite modules. This configuration binds the application to /yourapplication. If you want the application to work in the URL root you have to work around a lighttpd bug with the LighttpdCGIRootFix middleware.
Make sure to apply it only if you are mounting the application at the URL root. Also, see the Lighty docs for more information on FastCGI and Python (note that explicitly passing a socket to run() is no longer necessary).
Installing FastCGI applications on nginx is a bit tricky because by default some FastCGI parameters are not properly forwarded.
A basic FastCGI configuration for nginx looks like this:
location /yourapplication/ {
    include fastcgi_params;
    if ($uri ~ ^/yourapplication/(.*)?) {
        set $path_url $1;
    }
    fastcgi_param PATH_INFO $path_url;
    fastcgi_param SCRIPT_NAME /yourapplication;
    fastcgi_pass unix:/tmp/yourapplication-fcgi.sock;
}
This configuration binds the application to /yourapplication. If you want to have it in the URL root it’s a bit easier because you don’t have to figure out how to calculate PATH_INFO and SCRIPT_NAME:
location / {
    include fastcgi_params;
    fastcgi_param PATH_INFO $fastcgi_script_name;
    fastcgi_param SCRIPT_NAME "";
    fastcgi_pass unix:/tmp/yourapplication-fcgi.sock;
}
Since Nginx doesn’t load FastCGI apps, you have to do it by yourself. You can either write an init.d script for that or execute it inside a screen session:
$ screen
$ /var/www/yourapplication/yourapplication.fcgi
FastCGI deployments tend to be hard to debug on most webservers. Very often the only thing the server log tells you is something along the lines of “premature end of headers”. In order to debug the application the only thing that can really give you ideas why it breaks is switching to the correct user and executing the application by hand.
This example assumes your application is called application.fcgi and that your webserver user is www-data:
$ su www-data
$ cd /var/www/yourapplication
$ python application.fcgi
Traceback (most recent call last):
File "yourapplication.fcgi", line 4, in <module>
ImportError: No module named yourapplication
In this case the error seems to be “yourapplication” not being on the python path. Common problems are:
Many people prefer using a standalone Python HTTP server and proxying that server via nginx, Apache etc.
A very stable Python server is CherryPy. This part of the documentation shows you how to combine your WSGI application with the CherryPy WSGI server and how to configure the webserver for proxying.
To run your application you need a start-server.py file that starts up the WSGI Server.
It looks something along these lines:
from cherrypy import wsgiserver
from yourapplication import make_app
server = wsgiserver.CherryPyWSGIServer(('localhost', 8080), make_app())
try:
    server.start()
except KeyboardInterrupt:
    server.stop()
If you now start the file the server will listen on localhost:8080. Keep in mind that WSGI applications behave slightly differently in proxied setups. If you have not developed your application with proxying in mind, you can apply the ProxyFix middleware.
As an example we show here how to configure nginx to proxy to the server.
The basic nginx configuration looks like this:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;
    proxy_redirect default;
}
Since Nginx doesn’t start your server for you, you have to do it by yourself. You can either write an init.d script for that or execute it inside a screen session:
$ screen
$ python start-server.py
A lot of useful code contributed by the community is shipped with Werkzeug as part of the contrib module:
This module provides a class called AtomFeed which can be used to generate feeds in the Atom syndication format (see RFC 4287).
Example:
def atom_feed(request):
    feed = AtomFeed("My Blog", feed_url=request.url,
                    url=request.host_url,
                    subtitle="My example blog for a feed test.")
    for post in Post.query.limit(10).all():
        feed.add(post.title, post.body, content_type='html',
                 author=post.author, url=post.url, id=post.uid,
                 updated=post.last_update, published=post.pub_date)
    return feed.get_response()
A helper class that creates Atom feeds.
For more information on the elements see http://www.atomenabled.org/developers/syndication/
Everywhere where a list is demanded, any iterable can be used.
Add a new entry to the feed. This function can either be called with a FeedEntry or some keyword and positional arguments that are forwarded to the FeedEntry constructor.
Return a generator that yields pieces of XML.
Return a response object for the feed.
Convert the feed into a string.
Represents a single entry in a feed.
For more information on the elements see http://www.atomenabled.org/developers/syndication/
Everywhere where a list is demanded, any iterable can be used.
This module contains some helper classes that help one to add session support to a python WSGI application. For full client-side session storage see securecookie which implements a secure, client-side session storage.
from werkzeug.contrib.sessions import SessionMiddleware, \
     FilesystemSessionStore

app = SessionMiddleware(app, FilesystemSessionStore())
The current session will then appear in the WSGI environment as werkzeug.session. It’s recommended not to use the middleware but the stores directly in the application, though for very simple scripts a session middleware could be sufficient.
This module does not implement methods or ways to check if a session is expired. That should be done by a cronjob and is storage specific. For example, to prune unused filesystem sessions one could check the modified time of the files. If sessions are stored in the database, the new() method should add an expiration timestamp for the session.
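A cron-style pruning job for filesystem sessions could look like the sketch below; prune_sessions is a hypothetical helper based on file modification times, not part of Werkzeug:

```python
import os
import tempfile
import time

def prune_sessions(directory, max_age_seconds):
    """Illustrative cleanup: delete session files whose modification
    time is older than max_age_seconds; return the removed names."""
    now = time.time()
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age_seconds:
            os.remove(path)
            removed.append(name)
    return removed

# demo: one fresh file, one artificially aged file
d = tempfile.mkdtemp()
for name in ('fresh', 'stale'):
    open(os.path.join(d, name), 'w').close()
old = time.time() - 3600
os.utime(os.path.join(d, 'stale'), (old, old))    # pretend it is an hour old
assert prune_sessions(d, 600) == ['stale']
assert os.listdir(d) == ['fresh']
```

Run something like this periodically from cron, pointed at the FilesystemSessionStore directory.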
For better flexibility it’s recommended to not use the middleware but the store and session object directly in the application dispatching:
session_store = FilesystemSessionStore()

def application(environ, start_response):
    request = Request(environ)
    sid = request.cookies.get('cookie_name')
    if sid is None:
        request.session = session_store.new()
    else:
        request.session = session_store.get(sid)
    response = get_the_response_object(request)
    if request.session.should_save:
        session_store.save(request.session)
        response.set_cookie('cookie_name', request.session.sid)
    return response(environ, start_response)
Subclass of a dict that keeps track of direct object changes. Changes in mutable structures are not tracked, for those you have to set modified to True by hand.
The session ID as string.
True if the session was newly created, otherwise False.
Whenever an item on the cookie is set, this attribute is set to True. However this does not track modifications inside mutable objects in the session:
>>> c = Session({}, sid='deadbeefbabe2c00ffee')
>>> c["foo"] = [1, 2, 3]
>>> c.modified
True
>>> c.modified = False
>>> c["foo"].append(4)
>>> c.modified
False
In that situation it has to be set to modified by hand so that should_save can pick it up.
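The tracking behaviour can be sketched with a small dict subclass; ModifiedTrackingDict below is illustrative, not the class Werkzeug uses:

```python
class ModifiedTrackingDict(dict):
    """Illustrative sketch of the 'modified' flag: only direct
    __setitem__/__delitem__ calls flip it; inner mutations do not."""
    modified = False

    def __setitem__(self, key, value):
        self.modified = True
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        self.modified = True
        dict.__delitem__(self, key)

c = ModifiedTrackingDict()
c['foo'] = [1, 2, 3]
assert c.modified
c.modified = False
c['foo'].append(4)       # mutation inside the list is invisible ...
assert not c.modified    # ... so the flag must be set by hand
c.modified = True
```

This is why should_save cannot notice in-place changes to lists or dicts stored in the session.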
True if the session should be saved.
Changed in version 0.6: By default the session is now only saved if the session is modified, not if it is new like it was before.
Baseclass for all session stores. The Werkzeug contrib module does not implement any useful stores besides the filesystem store, application developers are encouraged to create their own stores.
Parameters: session_class – The session class to use. Defaults to Session.
Delete a session.
Simple function that generates a new session key.
Get a session for this sid or a new session object. This method has to check if the session key is valid and create a new session if that wasn’t the case.
Check if a key has the correct format.
Generate a new session.
Save a session.
Save if a session class wants an update.
Simple example session store that saves sessions on the filesystem. This store works best on POSIX systems and Windows Vista / Windows Server 2008 and newer.
Changed in version 0.6: renew_missing was added. Previously this was considered True; now the default changed to False and it can be explicitly deactivated.
Lists all sessions in the store.
New in version 0.6.
A simple middleware that puts the session object of a store provided into the WSGI environ. It automatically sets cookies and restores sessions.
However a middleware is not the preferred solution because it won’t be as fast as sessions managed by the application itself and will put a key into the WSGI environment only relevant for the application which is against the concept of WSGI.
The cookie parameters are the same as for the dump_cookie() function just prefixed with cookie_. Additionally max_age is called cookie_age and not cookie_max_age because of backwards compatibility.
This module implements a cookie that is not alterable from the client because it adds a checksum the server checks for. You can use it as session replacement if all you have is a user id or something to mark a logged in user.
Keep in mind that the data is still readable from the client as a normal cookie is. However you don’t have to store and flush the sessions you have at the server.
Example usage:
>>> from werkzeug.contrib.securecookie import SecureCookie
>>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
Dumping into a string so that one can store it in a cookie:
>>> value = x.serialize()
Loading from that string again:
>>> x = SecureCookie.unserialize(value, "deadbeef")
>>> x["baz"]
(1, 2, 3)
If someone modifies the cookie and the checksum is wrong the unserialize method will fail silently and return a new empty SecureCookie object.
Keep in mind that the values will be visible in the cookie so do not store data in a cookie you don’t want the user to see.
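The sign-then-verify idea behind SecureCookie can be sketched with the standard library. sign_cookie and unsign_cookie below are hypothetical helpers using JSON and HMAC-SHA1 rather than Werkzeug's actual pickle-based format:

```python
import base64
import hashlib
import hmac
import json

def sign_cookie(data, secret_key):
    """Serialize a dict with JSON and append an HMAC-SHA1 signature."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode('utf-8'))
    sig = hmac.new(secret_key, payload, hashlib.sha1).hexdigest().encode('ascii')
    return payload + b'.' + sig

def unsign_cookie(value, secret_key):
    """Return the stored dict, or an empty dict on a bad signature."""
    payload, _, sig = value.rpartition(b'.')
    expected = hmac.new(secret_key, payload, hashlib.sha1).hexdigest().encode('ascii')
    if not hmac.compare_digest(sig, expected):
        return {}     # tampered: fail silently, like unserialize()
    return json.loads(base64.urlsafe_b64decode(payload))

value = sign_cookie({'foo': 42}, b'deadbeef')
assert unsign_cookie(value, b'deadbeef') == {'foo': 42}
assert unsign_cookie(value + b'x', b'deadbeef') == {}   # tampering detected
```

Note the payload is only encoded, not encrypted: the client can read it, but cannot modify it without invalidating the signature.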
If you are using the werkzeug request objects you could integrate the secure cookie into your application like this:
from werkzeug.utils import cached_property
from werkzeug.wrappers import BaseRequest
from werkzeug.contrib.securecookie import SecureCookie

# don't use this key but a different one; you could just use
# os.urandom(20) to get something random
SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea'

class Request(BaseRequest):

    @cached_property
    def client_session(self):
        data = self.cookies.get('session_data')
        if not data:
            return SecureCookie(secret_key=SECRET_KEY)
        return SecureCookie.unserialize(data, SECRET_KEY)

def application(environ, start_response):
    request = Request(environ, start_response)

    # get a response object here
    response = ...

    if request.client_session.should_save:
        session_data = request.client_session.serialize()
        response.set_cookie('session_data', session_data,
                            httponly=True)
    return response(environ, start_response)
A less verbose integration can be achieved by using shorthand methods:
class Request(BaseRequest):

    @cached_property
    def client_session(self):
        return SecureCookie.load_cookie(self, secret_key=COOKIE_SECRET)

def application(environ, start_response):
    request = Request(environ, start_response)

    # get a response object here
    response = ...

    request.client_session.save_cookie(response)
    return response(environ, start_response)
The default implementation uses Pickle as this is the only module that used to be available in the standard library when this module was created. If you have simplejson available it’s strongly recommended to create a subclass and replace the serialization method:
import json
from werkzeug.contrib.securecookie import SecureCookie

class JSONSecureCookie(SecureCookie):
    serialization_method = json
The weakness of Pickle is that if someone gains access to the secret key the attacker can not only modify the session but also execute arbitrary code on the server.
Represents a secure cookie. You can subclass this class and provide an alternative mac method. The important thing is that the mac method is a function with a similar interface to the hashlib module. Required methods are update() and digest().
Example usage:
>>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
>>> x["foo"]
42
>>> x["baz"]
(1, 2, 3)
>>> x["blafasel"] = 23
>>> x.should_save
True
True if the cookie was newly created, otherwise False
Whenever an item on the cookie is set, this attribute is set to True. However this does not track modifications inside mutable objects in the cookie:
>>> c = SecureCookie()
>>> c["foo"] = [1, 2, 3]
>>> c.modified
True
>>> c.modified = False
>>> c["foo"].append(4)
>>> c.modified
False
In that situation it has to be set to modified by hand so that should_save can pick it up.
The hash method to use. This has to be a module with a new function or a function that creates a hashlib object, such as hashlib.md5. Subclasses can override this attribute. The default hash is sha1. Make sure to wrap this in staticmethod() if you store an arbitrary function there such as hashlib.sha1 which might be implemented as a function.
Loads a SecureCookie from a cookie in request. If the cookie is not set, a new SecureCookie instance is returned.
Quote the value for the cookie. This can be any object supported by serialization_method.
Parameters: value – the value to quote.
If the contents should be Base64-quoted. This can be disabled if the serialization process returns cookie-safe strings only.
Saves the SecureCookie in a cookie on response object. All parameters that are not described here are forwarded directly to set_cookie().
The module used for serialization. Unless overridden by subclasses the standard pickle module is used.
Serialize the secure cookie into a string.
If expires is provided, the session will be automatically invalidated after expiration when you unserialize it. This provides better protection against session cookie theft.
Parameters: expires – an optional expiration date for the cookie (a datetime.datetime object).
Unquote the value for the cookie. If unquoting does not work a UnquoteError is raised.
Parameters: value – the value to unquote.
Load the secure cookie from a serialized string.
Returns: a new SecureCookie.
Internal exception used to signal failures on quoting.
The main problem with dynamic Web sites is, well, they’re dynamic. Each time a user requests a page, the webserver executes a lot of code, queries the database, renders templates until the visitor gets the page he sees.
This is a lot more expensive than just loading a file from the file system and sending it to the visitor.
For most Web applications, this overhead isn’t a big deal but once it becomes one, you will be glad to have a cache system in place.
Caching is pretty simple. Basically you have a cache object lurking around somewhere that is connected to a remote cache or the file system or something else. When the request comes in you check if the current page is already in the cache and if so, you’re returning it from the cache. Otherwise you generate the page and put it into the cache. (Or a fragment of the page, you don’t have to cache the full thing)
Here is a simple example of how to cache a sidebar for a template:
def get_sidebar(user):
    identifier = 'sidebar_for/user%d' % user.id
    value = cache.get(identifier)
    if value is not None:
        return value
    value = generate_sidebar_for(user=user)
    cache.set(identifier, value, timeout=60 * 5)
    return value
To create a cache object you just import the cache system of your choice from the cache module and instantiate it. Then you can start working with that object:
>>> from werkzeug.contrib.cache import SimpleCache
>>> c = SimpleCache()
>>> c.set("foo", "value")
>>> c.get("foo")
'value'
>>> c.get("missing") is None
True
Please keep in mind that you have to create the cache and put it somewhere you have access to it (either as a module global you can import or you just put it into your WSGI application).
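One such layout can be sketched with a hypothetical stand-in (the DictCache class and the names below are illustrations, not Werkzeug API): a module-global cache object that every view imports and shares.

```python
class DictCache(object):
    """Minimal in-memory stand-in for a SimpleCache-like object."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, timeout=None):
        self._store[key] = value  # timeout is ignored in this sketch

    def get(self, key):
        return self._store.get(key)

# module-global instance; in a real app this would live in its own
# module (or on the WSGI application object) so views can import it
cache = DictCache()

def get_greeting(name):
    key = 'greeting/%s' % name
    value = cache.get(key)
    if value is None:
        value = 'Hello %s' % name  # expensive work would go here
        cache.set(key, value, timeout=300)
    return value
```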
Base class for the cache systems. All the cache systems implement this API or a superset of it.
Parameters: default_timeout – the default timeout that is used if no timeout is specified on set().
Works like set() but does not overwrite the values of already existing keys.
Clears the cache. Keep in mind that not all caches support completely clearing the cache.
Decrements the value of a key by delta. If the key does not yet exist it is initialized with -delta.
For supporting caches this is an atomic operation.
Deletes key from the cache. If it does not exist in the cache nothing happens.
Parameters: key – the key to delete.
Deletes multiple keys at once.
Parameters: keys – The function accepts multiple keys as positional arguments.
Looks up key in the cache and returns the value for it. If the key does not exist None is returned instead.
Parameters: key – the key to be looked up.
Works like get_many() but returns a dict:
d = cache.get_dict("foo", "bar")
foo = d["foo"]
bar = d["bar"]
Parameters: keys – The function accepts multiple keys as positional arguments.
Returns a list of values for the given keys. For each key an item in the list is created. Example:
foo, bar = cache.get_many("foo", "bar")
If a key can’t be looked up None is returned for that key instead.
Parameters: keys – The function accepts multiple keys as positional arguments.
Increments the value of a key by delta. If the key does not yet exist it is initialized with delta.
For supporting caches this is an atomic operation.
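The initialization rules for inc() and dec() described above can be sketched with a hypothetical dict-backed cache (not the real BaseCache, and without the atomicity guarantees real backends provide):

```python
class CounterCache(object):
    """Sketch of the inc()/dec() initialization semantics."""
    def __init__(self):
        self._store = {}

    def inc(self, key, delta=1):
        # a missing key is initialized with delta
        self._store[key] = self._store.get(key, 0) + delta
        return self._store[key]

    def dec(self, key, delta=1):
        # a missing key is initialized with -delta
        self._store[key] = self._store.get(key, 0) - delta
        return self._store[key]
```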
Adds a new key/value to the cache (overwrites value, if key already exists in the cache).
Sets multiple keys and values from a mapping.
A cache that doesn’t cache. This can be useful for unit testing.
Parameters: default_timeout – a dummy parameter that is ignored but exists for API compatibility with other caches.
Simple memory cache for single process environments. This class exists mainly for the development server and is not 100% thread safe. It tries to use as many atomic operations as possible and no locks for simplicity but it could happen under heavy load that keys are added multiple times.
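How per-key timeouts can work may be sketched with a hypothetical ExpiringCache; this is not the actual SimpleCache implementation, only an illustration of lazy expiry on lookup.

```python
import time

class ExpiringCache(object):
    """Sketch of per-key timeouts as an in-memory cache might apply them."""
    def __init__(self, default_timeout=300):
        self.default_timeout = default_timeout
        self._store = {}

    def set(self, key, value, timeout=None):
        if timeout is None:
            timeout = self.default_timeout
        # store the absolute expiry time next to the value
        self._store[key] = (time.time() + timeout, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires, value = item
        if expires < time.time():
            del self._store[key]  # lazily drop expired entries
            return None
        return value
```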
A cache that uses memcached as backend.
The first argument can either be an object that resembles the API of a memcache.Client or a tuple/list of server addresses. In the event that a tuple/list is passed, Werkzeug tries to import the best available memcache library.
Implementation notes: This cache backend works around some limitations in memcached to simplify the interface. For example unicode keys are encoded to utf-8 on the fly. Methods such as get_dict() return the keys in the same format as passed. Furthermore all get methods silently ignore key errors to not cause problems when untrusted user data is passed to the get methods which is often the case in web applications.
This class is deprecated in favour of MemcachedCache which now supports Google Appengine as well.
Changed in version 0.8: Deprecated in favour of MemcachedCache.
Uses the Redis key-value store as a cache backend.
The first argument can be either a string denoting address of the Redis server or an object resembling an instance of a redis.Redis class.
Note: Python Redis API already takes care of encoding unicode strings on the fly.
New in version 0.7.
New in version 0.8: key_prefix was added.
Changed in version 0.8: This cache backend now properly serializes objects.
Changed in version 0.8.3: This cache backend now supports password authentication.
A cache that stores the items on the file system. This cache depends on being the only user of the cache_dir. Make absolutely sure that nobody but this cache stores files there or otherwise the cache will randomly delete files therein.
Extra wrappers or mixins contributed by the community. These wrappers can be mixed into request objects to add extra functionality.
Example:
from werkzeug.wrappers import Request as RequestBase
from werkzeug.contrib.wrappers import JSONRequestMixin

class Request(RequestBase, JSONRequestMixin):
    pass
Afterwards this request object provides the extra functionality of the JSONRequestMixin.
Add json method to a request object. This will parse the input data through simplejson if possible.
BadRequest will be raised if the content-type is not json or if the data itself cannot be parsed as json.
Get the result of simplejson.loads if possible.
Add protobuf parsing method to a request object. This will parse the input data through protobuf if possible.
BadRequest will be raised if the content-type is not protobuf or if the data itself cannot be parsed properly.
Parse the data into an instance of proto_type.
By default the ProtobufRequestMixin will raise a BadRequest if the object is not initialized. You can bypass that check by setting this attribute to False.
This request mixin adds support for the wsgiorg routing args specification.
The positional URL arguments as tuple.
The keyword URL arguments as dict.
This mixin reverses the trailing slash behavior of script_root and path. This makes it possible to use urljoin() directly on the paths.
Because it changes the behavior of Request this class has to be mixed in before the actual request class:
class MyRequest(ReverseSlashBehaviorRequestMixin, Request):
    pass
This example shows the differences (for an application mounted on /application and the request going to /application/foo/bar):
             normal behavior    reverse behavior
script_root  /application       /application/
path         /foo/bar           foo/bar
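The benefit can be seen with the standard library's urljoin() (imported from urllib.parse on Python 3; on Python 2 it lives in the urlparse module):

```python
from urllib.parse import urljoin  # Python 2: from urlparse import urljoin

# reverse behavior: script_root keeps its trailing slash and path loses
# its leading slash, so urljoin resolves relative to the mount point
assert urljoin('/application/', 'foo/bar') == '/application/foo/bar'

# normal behavior: urljoin replaces the last path segment of the base
assert urljoin('/application', 'foo/bar') == '/foo/bar'
```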
Requested path as unicode. This works a bit like the regular path info in the WSGI environment but will not include a leading slash.
The root path of the script including a trailing slash.
If this mixin is mixed into a request class it will provide a dynamic charset attribute. This means that if the charset is transmitted in the content type headers it’s used from there.
Because it changes the behavior of Request this class has to be mixed in before the actual request class:
class MyRequest(DynamicCharsetRequestMixin, Request):
    pass
By default the request object assumes that the URL charset is the same as the data charset. If the charset varies on each request based on the transmitted data it’s not a good idea to let the URLs change based on that. Most browsers assume either utf-8 or latin1 for the URLs if they have troubles figuring out. It’s strongly recommended to set the URL charset to utf-8:
class MyRequest(DynamicCharsetRequestMixin, Request):
    url_charset = 'utf-8'
New in version 0.6.
The charset from the content type.
the default charset that is assumed if the content type header is missing or does not contain a charset parameter. The default is latin1 which is what HTTP specifies as default charset. You may however want to set this to utf-8 to better support browsers that do not transmit a charset for incoming data.
Called if a charset was provided but is not supported by the Python codecs module. By default latin1 is assumed then to not lose any information, you may override this method to change the behavior.
Parameters: charset – the charset that was not found.
Returns: the replacement charset.
If this mixin is mixed into a response class it will provide a dynamic charset attribute. This means that the charset is looked up in, and stored in, the Content-Type header and updates itself automatically. This also means a small performance hit but can be useful if you’re working with different charsets on responses.
Because the charset attribute is now a property at class level, the default value is stored in default_charset.
Because it changes the behavior of Response this class has to be mixed in before the actual response class:
class MyResponse(DynamicCharsetResponseMixin, Response):
    pass
New in version 0.6.
The charset for the response. It’s stored inside the Content-Type header as a parameter.
the default charset.
This module implements an IterIO that converts an iterator into a stream object and the other way round. Converting streams into iterators requires the greenlet module.
To convert an iterator into a stream all you have to do is to pass it directly to the IterIO constructor. In this example we pass it a newly created generator:
def foo():
    yield "something\n"
    yield "otherthings"

stream = IterIO(foo())
print stream.read()         # read the whole iterator
The other way round works a bit differently because we have to ensure that the code execution doesn’t take place yet. An IterIO call with a callable as first argument does two things. The function itself is passed an IterIO stream it can feed. The object returned by the IterIO constructor on the other hand is not a stream object but an iterator:
def foo(stream):
    stream.write("some")
    stream.write("thing")
    stream.flush()
    stream.write("otherthing")

iterator = IterIO(foo)
print iterator.next()       # prints something
print iterator.next()       # prints otherthing
iterator.next()             # raises StopIteration
Instances of this object implement an interface compatible with the standard Python file object. Streams are either read-only or write-only depending on how the object is created.
If the first argument is an iterable a file like object is returned that returns the contents of the iterable. In case the iterable is empty read operations will return the sentinel value.
If the first argument is a callable then the stream object will be created and passed to that function. The caller itself however will not receive a stream but an iterable. The function will be executed step by step as something iterates over the returned iterable. Each call to flush() will create an item for the iterable. If flush() is called without any writes in-between the sentinel value will be yielded.
Note for Python 3: due to the incompatible interface of bytes and streams you should set the sentinel value explicitly to an empty bytestring (b'') if you are expecting to deal with bytes as otherwise the end of the stream is marked with the wrong sentinel value.
New in version 0.9: sentinel parameter was added.
New in version 0.5.
This module includes various helpers that fix bugs in web servers. They may be necessary for some versions of a buggy web server but not others. We try to stay updated with the status of the bugs as well as possible, but you have to verify yourself whether they fix the problem you encounter.
If you notice bugs in webservers not fixed in this module consider contributing a patch.
Wrap the application in this middleware if you are using FastCGI or CGI and you have problems with your app root being set to the CGI script’s path instead of the path users are going to visit.
Changed in version 0.9: Added app_root parameter and renamed from LighttpdCGIRootFix.
On Windows, environment variables are limited to the system charset, which makes it impossible to store the PATH_INFO variable in the environment without loss of information on some systems.
This is for example a problem for CGI scripts on a Windows Apache.
This fixer works by recreating the PATH_INFO from REQUEST_URI, REQUEST_URL, or UNENCODED_URL (whatever is available). Thus the fix can only be applied if the webserver supports either of these variables.
Parameters: app – the WSGI application
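A simplified sketch of that reconstruction (hypothetical helper; the real fixer also deals with URL quoting and the alternative REQUEST_URL and UNENCODED_URL variables):

```python
def recreate_path_info(environ):
    """Sketch: rebuild PATH_INFO from REQUEST_URI minus SCRIPT_NAME."""
    request_uri = environ.get('REQUEST_URI')
    if request_uri is None:
        # the fix cannot be applied; keep whatever the server provided
        return environ.get('PATH_INFO', '')
    path = request_uri.split('?', 1)[0]    # drop the query string
    script_name = environ.get('SCRIPT_NAME', '')
    if path.startswith(script_name):
        path = path[len(script_name):]     # strip the app root
    return path
```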
This middleware can be applied to add HTTP proxy support to an application that was not designed with HTTP proxies in mind. It sets REMOTE_ADDR, HTTP_HOST from X-Forwarded headers.
If you have more than one proxy server in front of your app, set num_proxies accordingly.
Do not use this middleware in non-proxy setups for security reasons.
The original values of REMOTE_ADDR and HTTP_HOST are stored in the WSGI environment as werkzeug.proxy_fix.orig_remote_addr and werkzeug.proxy_fix.orig_http_host.
Selects the new remote addr from the given list of ips in X-Forwarded-For. By default it picks the one that the num_proxies proxy server provides. Before 0.9 it would always pick the first.
New in version 0.8.
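The selection rule described above can be sketched as a hypothetical helper (each proxy appends the peer address it received the request from, so trusted entries are counted from the right):

```python
def select_remote_addr(forwarded_for, num_proxies=1):
    """Pick the address added by the num_proxies-th proxy from the
    X-Forwarded-For list, mirroring the behavior described for 0.9."""
    if len(forwarded_for) < num_proxies:
        return None
    return forwarded_for[-num_proxies].strip()
```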
This middleware can remove response headers and add others. This is for example useful to remove the Date header from responses if you are using a server that adds that header no matter whether it’s already present, or to add X-Powered-By headers:
app = HeaderRewriterFix(app, remove_headers=['Date'],
                        add_headers=[('X-Powered-By', 'WSGI')])
This middleware fixes a couple of bugs with Microsoft Internet Explorer. Currently the following fixes are applied:
If it does not detect affected Internet Explorer versions it won’t touch the request / response.
This module provides a simple WSGI profiler middleware for finding bottlenecks in web applications. It uses the profile or cProfile module to do the profiling and writes the stats to the stream provided (defaults to stderr).
Example usage:
from werkzeug.contrib.profiler import ProfilerMiddleware
app = ProfilerMiddleware(app)
An object that redirects write calls to multiple streams. Use this to log to both sys.stdout and a file:
f = open('profiler.log', 'w')
stream = MergeStream(sys.stdout, f)
profiler = ProfilerMiddleware(app, stream)
Simple profiler middleware. Wraps a WSGI application and profiles a request. This intentionally buffers the response so that timings are more exact.
By giving the profile_dir argument, pstats.Stats files are saved to that directory, one file per request. Without it, a summary is printed to stream instead.
For the exact meaning of sort_by and restrictions consult the profile documentation.
New in version 0.9: Added support for restrictions and profile_dir.
Return a new callback for werkzeug.script that starts a local server with the profiler enabled.
from werkzeug.contrib import profiler
action_profile = profiler.make_action(make_app)
New in version 0.5.
This module provides a middleware that performs sanity checks of the WSGI application. It checks that PEP 333 is properly implemented and warns on some common HTTP errors such as non-empty responses for 304 status codes.
This module provides a middleware, the LintMiddleware. Wrap your application with it and it will warn about common problems with WSGI and HTTP while your application is running.
It’s strongly recommended to use it during development.
This middleware wraps an application and warns on common errors. Among other things it currently checks for the following problems:
Detected errors are emitted using the standard Python warnings system and usually end up on stderr.
from werkzeug.contrib.lint import LintMiddleware
app = LintMiddleware(app)
Parameters: app – the application to wrap
This page covers important terms used in the documentation and Werkzeug itself.
WSGI is a specification for Python web applications that Werkzeug follows. It was specified in PEP 333 and is widely supported. Unlike previous solutions it guarantees that web applications, servers and utilities can work together.
For Werkzeug, a response object is an object that works like a WSGI application but does not do any request processing. Usually you have a view function or controller method that processes the request and assembles a response object.
A response object is not necessarily the BaseResponse object or a subclass thereof.
For example Pylons/webob provide a very similar response class that can be used as well (webob.Response).
Often people speak of MVC (Model, View, Controller) when developing web applications. However, the Django framework coined MTV (Model, Template, View) which basically means the same but reduces the concept to the data model, a function that processes data from the request and the database and renders a template.
Werkzeug itself does not tell you how you should develop applications, but the documentation often speaks of view functions that work roughly the same. The idea of a view function is that it’s called with a request object (and optionally some parameters from a URL rule) and returns a response object.
Since early Python 2 days unicode was part of all default Python builds. It allows developers to write applications that deal with non-ASCII characters in a straightforward way. But working with unicode requires a basic knowledge about that matter, especially when working with libraries that do not support it.
Werkzeug uses unicode internally everywhere text data is assumed, even though the HTTP standard is not unicode aware itself. Basically all incoming data is decoded from the specified charset (utf-8 by default) so that you don’t operate on bytestrings any more. Outgoing unicode data is then encoded into the target charset again.
In Python 2 there are two basic string types: str and unicode. str may carry encoded unicode data but it’s always represented in bytes, whereas the unicode type does not contain bytes but code points. What does this mean? Imagine you have the German umlaut ö. In ASCII you cannot represent that character, but in the latin-1 and utf-8 character sets you can; however, it looks different when encoded:
>>> u'ö'.encode('latin1')
'\xf6'
>>> u'ö'.encode('utf-8')
'\xc3\xb6'
So an ö might look totally different depending on the encoding, which makes it hard to work with. The solution is using the unicode type (as we did above; note the u prefix before the string). The unicode type does not store the bytes for ö but the information that this is a LATIN SMALL LETTER O WITH DIAERESIS.
Doing len(u'ö') will always give us the expected “1” but len('ö') might give different results depending on the encoding of 'ö'.
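The length difference is easy to check (here written with escape sequences so it works regardless of source file encoding):

```python
u = u'\xf6'   # LATIN SMALL LETTER O WITH DIAERESIS, i.e. u'ö'

assert len(u) == 1                     # one code point, always
assert len(u.encode('utf-8')) == 2     # two bytes when utf-8 encoded
assert len(u.encode('latin1')) == 1    # one byte when latin-1 encoded
```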
The problem with unicode is that HTTP does not know what unicode is. HTTP is limited to bytes, but this is not a big problem as Werkzeug automatically decodes and encodes all incoming and outgoing data for us. Basically what this means is that data sent from the browser to the web application is by default decoded from a utf-8 bytestring into a unicode string. Data sent from the application back to the browser that is not yet a bytestring is then encoded back to utf-8.
Usually this “just works” and we don’t have to worry about it, but there are situations where this behavior is problematic. For example the Python 2 IO layer is not unicode aware. This means that whenever you work with data from the file system you have to properly decode it. The correct way to load a text file from the file system looks like this:
f = open('/path/to/the_file.txt', 'r')
try:
    text = f.read().decode('utf-8')  # assuming the file is utf-8 encoded
finally:
    f.close()
There is also the codecs module which provides an open function that decodes automatically from the given encoding.
With Werkzeug 0.3 onwards you can further control the way Werkzeug works with unicode. In the past Werkzeug ignored encoding errors silently on incoming data. This decision was made to avoid internal server errors if the user tampered with the submitted data. However there are situations where you want to abort with a 400 BAD REQUEST instead of silently ignoring the error.
All the functions that do internal decoding now accept an errors keyword argument that behaves like the errors parameter of the builtin string method decode. The following values are possible:
Unlike regular Python decoding, Werkzeug does not raise a UnicodeDecodeError if the decoding failed but an HTTPUnicodeError, which is a direct subclass of UnicodeError and the BadRequest HTTP exception. The reason is that if this exception is not caught by the application but a catch-all for HTTP exceptions exists, a default 400 BAD REQUEST error page is displayed.
There is additional error handling available which is a Werkzeug extension to the regular codec error handling which is called fallback. Often you want to use utf-8 but support latin1 as legacy encoding too if decoding failed. For this case you can use the fallback error handling. For example you can specify 'fallback:iso-8859-15' to tell Werkzeug it should try with iso-8859-15 if utf-8 failed. If this decoding fails too (which should not happen for most legacy charsets such as iso-8859-15) the error is silently ignored as if the error handling was ignore.
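The fallback behavior can be sketched as a hypothetical helper (not how Werkzeug registers the error handler internally, just the decoding strategy it describes):

```python
def decode_with_fallback(data, charset='utf-8', fallback='iso-8859-15'):
    """Sketch of 'fallback:iso-8859-15' error handling: try the primary
    charset, then the fallback, then silently ignore errors."""
    try:
        return data.decode(charset)
    except UnicodeDecodeError:
        try:
            return data.decode(fallback)
        except UnicodeDecodeError:
            return data.decode(charset, 'ignore')
```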
Further details are available as part of the API documentation of the concrete implementations of the functions or classes working with unicode.
As request and response objects usually are the central entities of Werkzeug powered applications you can change the default encoding Werkzeug operates on by subclassing these two classes. For example you can easily set the application to utf-7 and strict error handling:
from werkzeug.wrappers import BaseRequest, BaseResponse

class Request(BaseRequest):
    charset = 'utf-7'
    encoding_errors = 'strict'

class Response(BaseResponse):
    charset = 'utf-7'
Keep in mind that the error handling is only customizable for all decoding but not encoding. If Werkzeug encounters an encoding error it will raise a UnicodeEncodeError. It’s your responsibility to not create data that is not present in the target charset (a non issue with all unicode encodings such as utf-8).
The most important rule about web development is “Do not trust the user”. This is especially true for incoming request data on the input stream. With WSGI this is actually a bit harder than you would expect. Because of that Werkzeug wraps the request stream for you to save you from the most prominent problems with it.
The input stream has no end-of-file marker. If you would call the read() method on the wsgi.input stream you would cause your application to hang on conforming servers. This is actually intentional however painful. Werkzeug solves that problem by wrapping the input stream in a special LimitedStream. The input stream is exposed on the request objects as stream. This one is either an empty stream (if the form data was parsed) or a limited stream with the contents of the input stream.
Werkzeug parses the incoming data under the following situations:
These calls are not interchangeable. If you invoke parse_form_data() you must not use the request object or at least not the attributes that trigger the parsing process.
This is also true if you read from the wsgi.input stream before the parsing.
General rule: Leave the WSGI input stream alone. Especially in WSGI middlewares. Use either the parsing functions or the request object. Do not mix multiple WSGI utility libraries for form data parsing or anything else that works on the input stream.
The standard Werkzeug parsing behavior handles three cases:
Special note on the get_data method: Calling this loads the full request data into memory. This is only safe to do if the max_content_length is set. Also you can either read the stream or call get_data().
To avoid being the victim of a DDOS attack you can set the maximum accepted content length and request field sizes. The BaseRequest class has two attributes for that: max_content_length and max_form_memory_size.
The first one can be used to limit the total content length. For example by setting it to 1024 * 1024 * 16 the request won’t accept more than 16MB of transmitted data.
Because certain data can’t be moved to the hard disk (regular post data) whereas temporary files can, there is a second limit you can set. The max_form_memory_size limits the size of POST transmitted form data. By setting it to 1024 * 1024 * 2 you can make sure that all in-memory stored form data is not more than 2MB in size.
This however does not affect in-memory stored files if the stream_factory used returns an in-memory file.
Modern web applications transmit a lot more than multipart form data or url encoded data. Extending the parsing capabilities by subclassing the BaseRequest is simple. The following example implements parsing for incoming JSON data:
from werkzeug.utils import cached_property
from werkzeug.wrappers import Request
from simplejson import loads

class JSONRequest(Request):
    # accept up to 4MB of transmitted data.
    max_content_length = 1024 * 1024 * 4

    @cached_property
    def json(self):
        if self.headers.get('content-type') == 'application/json':
            return loads(self.data)
This file lists all major changes in Werkzeug over the versions. For API breaking changes have a look at API Changes, they are listed there in detail.
Release date and codename to be decided
(bugfix release, release date to be decided)
(bugfix release, released on August 26th 2013)
(bugfix release, released on July 25th 2013)
Restored behavior of the data descriptor of the request class to pre 0.9 behavior. This now also means that .data and .get_data() have different behavior. New code should use .get_data() always.
In addition to that there is now a flag for the .get_data() method that controls what should happen with form data parsing and the form parser will honor cached data. This makes dealing with custom form data more consistent.
(bugfix release, released on July 18th 2013)
(bugfix release, released on June 14th 2013)
Released on June 13th 2013, codename Planierraupe.
(bugfix release, release date to be announced)
(bugfix release, released on February 5th 2012)
(bugfix release, released on December 16th 2011)
(bugfix release, released on September 30th 2011)
Released on September 29th 2011, codename Lötkolben
(bugfix release, released on September 30th 2011)
(bugfix release, released on July 26th 2011)
Released on July 24th 2011, codename Schraubschlüssel
(bugfix release, released on April 23rd 2010)
(bugfix release, released on April 13th 2010)
Released on Feb 19th 2010, codename Hammer.
(bugfix release for 0.5, released on July 9th 2009)
Released on April 24th, codename Schlagbohrer.
(Bugfix release, released on January 11th 2009)
Released on November 23rd 2008, codename Schraubenzieher.
(bugfix release, released on June 24th 2008)
Released on June 14th 2008, codename EUR325CAT6.
Released Feb 14th 2008, codename Faustkeil.
If you can’t find the information you are looking for, have a look at the table of contents or try the search function.