### Fixed
* `response.iter_bytes()` no longer raises a ValueError when called on a response with no content. (Pull #1827)
-* The `'wsgi.error'` configuration now defaults to `sys.stderr`, and is corrected to be a `TextIO` interface, not a `BytesIO` interface. Additionally, the WSGITransport now accepts a `wsgi_error` confguration. (Pull #1828)
+* The `'wsgi.error'` configuration now defaults to `sys.stderr`, and is corrected to be a `TextIO` interface, not a `BytesIO` interface. Additionally, the WSGITransport now accepts a `wsgi_error` configuration. (Pull #1828)
* Follow the WSGI spec by properly closing the iterable returned by the application. (Pull #1830)
## 0.19.0 (19th August, 2021)
The 0.14 release includes a range of improvements to the public API, intended to prepare for our upcoming 1.0 release.
-* Our HTTP/2 support is now fully optional. **You now need to use `pip install httpx[http2]` if you want to include the HTTP/2 dependancies.**
+* Our HTTP/2 support is now fully optional. **You now need to use `pip install httpx[http2]` if you want to include the HTTP/2 dependencies.**
* Our HSTS support has now been removed. Rewriting URLs from `http` to `https` if the host is on the HSTS list can be beneficial in avoiding roundtrips to incorrectly formed URLs, but on balance we've decided to remove this feature, on the principle of least surprise. Most programmatic clients do not include HSTS support, and for now we're opting to remove our support for it.
* Our exception hierarchy has been overhauled. Most users will want to stick with their existing `httpx.HTTPError` usage, but we've got a clearer overall structure now. See https://www.python-httpx.org/exceptions/ for more details.
- The SSL configuration settings of `verify`, `cert`, and `trust_env` now raise warnings if used per-request when using a Client instance. They should always be set on the Client instance itself. (Pull #597)
- Use plain strings "TUNNEL_ONLY" or "FORWARD_ONLY" on the HTTPProxy `proxy_mode` argument. The `HTTPProxyMode` enum still exists, but its usage will raise warnings. (#610)
- Pool timeouts are now on the timeout configuration, not the pool limits configuration. (Pull #563)
-- The timeout configuration is now named `httpx.Timeout(...)`, not `httpx.TimeoutConfig(...)`. The old version currently remains as a synonym for backwards compatability. (Pull #591)
+- The timeout configuration is now named `httpx.Timeout(...)`, not `httpx.TimeoutConfig(...)`. The old version currently remains as a synonym for backwards compatibility. (Pull #591)
---
- Switch IDNA encoding from IDNA 2003 to IDNA 2008. (Pull #161)
- Expose base classes for alternate concurrency backends. (Pull #178)
- Improve Multipart parameter encoding. (Pull #167)
-- Add the `headers` proeprty to `BaseClient`. (Pull #159)
+- Add the `headers` property to `BaseClient`. (Pull #159)
- Add support for Google's `brotli` library. (Pull #156)
- Remove deprecated TLS versions (TLSv1 and TLSv1.1) from default `SSLConfig`. (Pull #155)
- Fix `URL.join(...)` to work similarly to RFC 3986 URL joining. (Pull #144)
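The RFC 3986 joining behaviour referenced in the last entry can be illustrated with the standard library's `urljoin`, which follows the same resolution rules. This is an illustration of the semantics only, not httpx's own `URL.join(...)` implementation:

```python
from urllib.parse import urljoin

# Relative references are resolved against the base URL's path.
print(urljoin("https://example.org/path/page", "other"))  # https://example.org/path/other

# An absolute path replaces the base path entirely.
print(urljoin("https://example.org/path/page", "/root"))  # https://example.org/root

# ".." segments step up through the base path.
print(urljoin("https://example.org/path/", "../up"))      # https://example.org/up
```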
`Client` instances also support features that aren't available at the top-level API, such as:
-- Cookie persistance across requests.
+- Cookie persistence across requests.
- Applying configuration across all outgoing requests.
- Sending requests through HTTP proxies.
- Using [HTTP/2](http2.md).
do so explicitly...
```python
-respose = client.get(url, follow_redirects=True)
+response = client.get(url, follow_redirects=True)
```
Or else instantiate a client, with redirect following enabled by default...
## Content encoding
-HTTPX uses `utf-8` for encoding `str` request bodies. For example, when using `content=<str>` the request body will be encoded to `utf-8` before being sent over the wire. This differs from Requests which uses `latin1`. If you need an explicit encoding, pass encoded bytes explictly, e.g. `content=<str>.encode("latin1")`.
+HTTPX uses `utf-8` for encoding `str` request bodies. For example, when using `content=<str>` the request body will be encoded to `utf-8` before being sent over the wire. This differs from Requests which uses `latin1`. If you need an explicit encoding, pass encoded bytes explicitly, e.g. `content=<str>.encode("latin1")`.
For response bodies, assuming the server didn't send an explicit encoding, HTTPX will do its best to figure out an appropriate one, using `charset_normalizer` to make a guess at the encoding to use for decoding the response. As a fallback, or for any content with fewer than 32 octets, the response will be decoded using `utf-8` with the `errors="replace"` decoder strategy.
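The `utf-8` default described above is plain `str.encode` behaviour, shown here without httpx itself:

```python
body = "café"

# content=<str> is encoded as utf-8 before being sent over the wire.
print(body.encode("utf-8"))   # b'caf\xc3\xa9'

# To reproduce Requests' latin1 behaviour, pass pre-encoded bytes instead.
print(body.encode("latin1"))  # b'caf\xe9'
```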
## Cookies
def __del__(self) -> None:
# We use 'getattr' here, to manage the case where '__del__()' is called
- # on a partically initiallized instance that raised an exception during
+ # on a partially initialized instance that raised an exception during
# the call to '__init__()'.
if getattr(self, "_state", None) == ClientState.OPENED: # noqa: B009
self.close()
def __del__(self) -> None:
# We use 'getattr' here, to manage the case where '__del__()' is called
- # on a partically initiallized instance that raised an exception during
+ # on a partially initialized instance that raised an exception during
# the call to '__init__()'.
if getattr(self, "_state", None) == ClientState.OPENED: # noqa: B009
# Unlike the sync case, we cannot silently close the client when
returning a two-tuple of (<headers>, <stream>).
"""
if data is not None and not isinstance(data, dict):
- # We prefer to seperate `content=<bytes|str|byte iterator|bytes aiterator>`
+ # We prefer to separate `content=<bytes|str|byte iterator|bytes aiterator>`
# for raw request content, and `data=<form data>` for url encoded or
# multipart form content.
#
"""
The URL query string, as raw bytes, excluding the leading b"?".
- This is neccessarily a bytewise interface, because we cannot
+ This is necessarily a bytewise interface, because we cannot
perform URL decoding of this representation until we've parsed
the keys and values into a QueryParams instance.
port = kwargs.pop("port", self.port)
if host and ":" in host and host[0] != "[":
- # IPv6 addresses need to be escaped within sqaure brackets.
+ # IPv6 addresses need to be escaped within square brackets.
host = f"[{host}]"
kwargs["netloc"] = (
def items(self) -> typing.ItemsView[str, str]:
"""
Return `(key, value)` items of headers. Concatenate headers
- into a single comma seperated value when a key occurs multiple times.
+ into a single comma separated value when a key occurs multiple times.
"""
values_dict: typing.Dict[str, str] = {}
for _, key, value in self._list:
def multi_items(self) -> typing.List[typing.Tuple[str, str]]:
"""
Return a list of `(key, value)` pairs of headers. Allow multiple
- occurences of the same key without concatenating into a single
- comma seperated value.
+ occurrences of the same key without concatenating into a single
+ comma separated value.
"""
return [
(key.decode(self.encoding), value.decode(self.encoding))
def get(self, key: str, default: typing.Any = None) -> typing.Any:
"""
- Return a header value. If multiple occurences of the header occur
+ Return a header value. If multiple occurrences of the header occur
then concatenate them together with commas.
"""
try:
def get_list(self, key: str, split_commas: bool = False) -> typing.List[str]:
"""
Return a list of all header values for a given key.
- If `split_commas=True` is passed, then any comma seperated header
+ If `split_commas=True` is passed, then any comma separated header
values are split into multiple return strings.
"""
get_header_key = key.lower().encode(self.encoding)
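The contrast these docstrings describe — comma-joined lookup versus preserved multiple occurrences — can be sketched in plain Python. This is a toy model; the names `get` and `multi_items` mirror the methods above but this is not httpx's `Headers` class:

```python
raw = [
    ("accept", "text/html"),
    ("accept", "application/json"),
    ("host", "example.org"),
]

def get(key):
    # Concatenate multiple occurrences into a single comma separated value.
    return ", ".join(v for k, v in raw if k == key.lower())

def multi_items():
    # Preserve each occurrence as its own (key, value) pair.
    return list(raw)

print(get("Accept"))       # text/html, application/json
print(len(multi_items()))  # 3
```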
@property
def apparent_encoding(self) -> typing.Optional[str]:
"""
- Return the encoding, as detemined by `charset_normalizer`.
+ Return the encoding, as determined by `charset_normalizer`.
"""
content = getattr(self, "_content", b"")
if len(content) < 32:
At this layer of API we're simply using plain primitives. No `Request` or
`Response` models, no fancy `URL` or `Header` handling. This strict point
- of cut-off provides a clear design seperation between the HTTPX API,
+ of cut-off provides a clear design separation between the HTTPX API,
and the low-level network handling.
Developers shouldn't typically ever need to call into this API directly,
Example usages...
-# Disable HTTP/2 on a single specfic domain.
+# Disable HTTP/2 on a single specific domain.
mounts = {
"all://": httpx.HTTPTransport(http2=True),
"all://*example.org": httpx.HTTPTransport()
def read(self) -> bytes:
"""
- Simple cases can use `.read()` as a convience method for consuming
+ Simple cases can use `.read()` as a convenience method for consuming
the entire stream and then closing it.
Example:
# on how names in `NO_PROXY` are handled.
if hostname == "*":
# If NO_PROXY=* is used or if "*" occurs as any one of the comma
- # seperated hostnames, then we should just bypass any information
+ # separated hostnames, then we should just bypass any information
# from HTTP_PROXY, HTTPS_PROXY, ALL_PROXY, and always ignore
# proxies.
return {}
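The rule in the comment above can be sketched as follows; `environment_proxies` is a hypothetical helper for illustration, not httpx's actual function:

```python
def environment_proxies(environ):
    # A "*" entry anywhere in the comma separated NO_PROXY value
    # bypasses HTTP_PROXY / HTTPS_PROXY / ALL_PROXY entirely.
    no_proxy = [h.strip() for h in environ.get("NO_PROXY", "").split(",")]
    if "*" in no_proxy:
        return {}
    mounts = {}
    if "ALL_PROXY" in environ:
        mounts["all://"] = environ["ALL_PROXY"]
    return mounts

print(environment_proxies({"ALL_PROXY": "http://localhost:123", "NO_PROXY": "*"}))
print(environment_proxies({"ALL_PROXY": "http://localhost:123", "NO_PROXY": "other.com"}))
```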
{"ALL_PROXY": "http://localhost:123", "NO_PROXY": ".example1.com"},
None,
),
- # Proxied, because NO_PROXY subdomains only match if "." seperated.
+ # Proxied, because NO_PROXY subdomains only match if "." separated.
(
"https://www.example2.com",
{"ALL_PROXY": "http://localhost:123", "NO_PROXY": "ample2.com"},
def test_get_netrc_unknown():
netrc_info = NetRCInfo([str(FIXTURES_DIR / ".netrc")])
- assert netrc_info.get_credentials("nonexistant.org") is None
+ assert netrc_info.get_credentials("nonexistent.org") is None
@pytest.mark.parametrize(