Motivation:
There are use-cases for SwiftNIO where there are no actual sockets but
rather two pipe file descriptors - one for input, one for output.
There's no real reason why SwiftNIO shouldn't work for those.
Modifications:
Add a PipeChannel.
Result:
More use-cases for SwiftNIO.
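The pipe-backed channel can be driven from a bootstrap; a minimal sketch, assuming a `NIOPipeBootstrap`-style API and a hypothetical `EchoHandler` (exact names depend on the NIO version):

```swift
import NIO  // requires the swift-nio package

// Sketch: run a NIO pipeline over two pipe file descriptors
// (here stdin/stdout) instead of a socket.
let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let channel = try NIOPipeBootstrap(group: group)
    .channelInitializer { channel in
        channel.pipeline.addHandler(EchoHandler())  // hypothetical handler
    }
    .withPipes(inputDescriptor: 0, outputDescriptor: 1)
    .wait()
try channel.closeFuture.wait()
```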
Motivation:
The integration tests depended on python in order to print 80,000 x
characters.
Modifications:
Remove the python dependency and express the same with `dd` & `tr`.
Result:
Fewer dependencies.
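For reference, the 80,000 repeated characters can be produced with `dd` and `tr` alone; a sketch of the approach (the exact invocation in the tests may differ):

```shell
# Emit 80,000 'x' characters without python: read 80,000 zero bytes
# from /dev/zero and translate each NUL to an 'x'.
dd if=/dev/zero bs=80000 count=1 2>/dev/null | tr '\0' 'x'
```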
Motivation:
Due to https://bugs.swift.org/browse/SR-10614, assigning an array containing a tuple type
to a variable that expects an array containing a different-but-compatible tuple type will cause
an allocation and copy of that array storage.
In some cases this is necessary, but we were doing it in the HTTPHeaders constructors, which meant
that the swift-nio-http2 code was hitting a hilariously over-the-top number of allocations.
Modifications:
- Change the internal storage type of HTTPHeaders to match the public constructor.
Result:
Fewer allocations when constructing HTTPHeaders
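A hypothetical illustration of the conversion in question (the variable names below are invented, not NIO's actual storage):

```swift
// The public constructor takes labelled tuples…
let headers: [(name: String, value: String)] = [
    ("Host", "example.com"),
    ("Accept", "*/*"),
]

// …and under SR-10614, assigning to a variable typed with the
// compatible-but-unlabelled tuple forces an allocation and a copy
// of the array storage rather than a cheap retain. Keeping the
// internal storage type identical to the constructor's avoids it.
let storage: [(String, String)] = headers
print(storage[0].0)  // prints "Host"
```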
Motivation:
The file-io implementation in the example HTTP1 server just crashed on
empty files ;).
Modifications:
- fix the crash
- add integration test
Result:
fewer crashes
Motivation:
The HTTP/1 headers were quite complicated, CoW-boxed and exposed their
guts (being a ByteBuffer). In Swift 5 this is no longer necessary
because of native UTF-8 Strings.
Modifications:
- make headers simply `[(String, String)]`
- remove `HTTPHeaderIndex`, `HTTPListHeaderIterator`, etc
Result:
- simpler and more performant code
* Use fixed default string to write in response and do not hardcode content length
Motivation:
Improve example code; resolves #714.
Modifications:
Moved the hardcoded "Hello world" string to a constant property and stopped hardcoding the content-length header value.
Result:
In the example code, the content length will be written correctly when the default response changes.
Motivation:
As #563 shows, sometimes in CI we don't reach the expected number of
open file descriptors. The current assumption is that this is just a
race when CI is slow. This allows up to 10s to reach the correct number
of open file descriptors.
Modifications:
- allow up to 10s to reach the correct number of fds
Result:
hopefully fixing #563 finally
Motivation:
We should support the latest and greatest.
Modifications:
- add Ubuntu 18.04 support (packages exist for Swift 4.2 only)
- add Swift 4.2 support (for Ubuntu 14.04, 16.04 and 18.04)
Result:
- supporting the latest stuff
Motivation:
We have a flaky integration test (#563) and I can't reproduce locally in
docker unfortunately. We have an `lsof` output but it just reads
NIOHTTP1S 5157 root 12u sock 0,7 0t0 113688082 can't identify protocol
which isn't very helpful. That's a known `lsof` limitation on Linux, and
the recommendation is to run `netstat`, which we now do.
Modifications:
add `netstat` output to the integration tests' debugging info.
Result:
hopefully more information so we can soon fix #563
Motivation:
Once again, we had an extra event loop hop between a channel
registration and its activation. Usually this shows up as `EPOLLHUP` but
not so for accepted channels. What happened instead is that we had a
small race window after we accepted a channel. It was in a state where
it was not marked active _yet_ and therefore we'd not read out its data
in case we received a `.readEOF`. That usually leads to a stale
connection. Fortunately it doesn't happen very often that the client
connects, immediately sends some data and then shuts the write end of
the socket.
Modifications:
prevent the event loop hop between registration and activation
Result:
will always read out the read buffer on .readEOF
Motivation:
We had a number of problems:
1. We wanted to lazily process input EOFs and connection resets only
when the user actually calls `read()`. On Linux however you cannot
unsubscribe from `EPOLLHUP` so that's not possible.
2. Lazily processing input EOFs/connection resets wastes kernel
resources, which could potentially lead to a DoS.
3. The very low-level `Selector` interpreted the eventing mechanism's
events quite a lot so the `EventLoop`/`Channel` only ever saw
`readable` or `writable` without further information what exactly
happened.
4. We completely ignored `EPOLLHUP` until now, which on Unix Domain
Socket close leads to a 100% CPU spin (issue #277).
Modifications:
- made the `Selector` interface richer, it now sends the following
events: `readable`, `writable`, `readEOF` (input EOF), `reset`
(connection reset or some error)
- process input EOFs and connection resets/errors eagerly
- change all tests which relied on using unconnected and unbound sockets
to use connected/bound ones, as `epoll_wait` would otherwise keep
sending us a stream of `EPOLLHUP`s which would now lead to an eager
close
Result:
- most importantly: fix issue #277
- waste less kernel resources (by dealing with input EOFs/connection
resets eagerly)
- bring kqueue/epoll more in line
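The richer interface can be pictured as an option set of events instead of a bare readable/writable pair; a simplified sketch (illustrative names, not NIO's actual internals):

```swift
// Simplified model of the richer Selector events described above.
struct SelectorEventSet: OptionSet {
    let rawValue: UInt8
    static let readable = SelectorEventSet(rawValue: 1 << 0)
    static let writable = SelectorEventSet(rawValue: 1 << 1)
    static let readEOF  = SelectorEventSet(rawValue: 1 << 2)  // input EOF
    static let reset    = SelectorEventSet(rawValue: 1 << 3)  // reset or error
}

// A channel can now tell EPOLLIN|EPOLLRDHUP apart from plain EPOLLIN
// and process the input EOF eagerly instead of waiting for a read().
let events: SelectorEventSet = [.readable, .readEOF]
print(events.contains(.readEOF))  // prints "true"
```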
Motivation:
`ab -k` behaves weirdly: it'll send an HTTP/1.0 request with keep-alive set
but stops doing anything at all if the server doesn't also set Connection: keep-alive
which our example HTTP1Server didn't do.
Modifications:
In the HTTP1Server example if we receive an HTTP/1.0 request with
Connection: keep-alive, we'll now set keep-alive too.
Result:
ab -k doesn't get stuck anymore.
Motivation:
We currently parse each header eagerly, which means we frequently convert from bytes to `String`. In reality, most of the time the user is not interested in all the headers, so that work is largely wasted.
Modifications:
Rewrite HTTPHeaders to use a ByteBuffer as its internal storage, and only parse headers on demand.
Result:
Less overhead for parsing headers.
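The on-demand idea can be sketched without any NIO types: keep the raw header bytes plus index ranges, and only materialise `String`s when a name is actually looked up (a simplified illustration, not the actual `HTTPHeaders` implementation):

```swift
// Simplified sketch of on-demand header parsing: store raw bytes and
// (nameRange, valueRange) pairs, and pay the bytes→String cost only
// on lookup, for only the headers the caller asks about.
struct LazyHeaders {
    let bytes: [UInt8]
    let indices: [(name: Range<Int>, value: Range<Int>)]

    func first(name: String) -> String? {
        let wanted = Array(name.lowercased().utf8)
        for (n, v) in indices {
            // Crude ASCII case folding; fine for header-name characters.
            if bytes[n].map({ $0 | 0x20 }).elementsEqual(wanted) {
                return String(decoding: bytes[v], as: UTF8.self)
            }
        }
        return nil
    }
}

let raw = Array("Content-Length: 42".utf8)
let headers = LazyHeaders(bytes: raw, indices: [(name: 0..<14, value: 16..<18)])
print(headers.first(name: "content-length") ?? "nil")  // prints "42"
```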
Motivation:
We have one integration test left disabled, but it basically already
passes: it just expects the wrong format of error. We should
have all our integration tests enabled.
Modifications:
Enabled the test.
Changed the wording of the error it expected.
Result:
No disabled integration tests.
Motivation:
Currently the HTTP decoders can throw errors, but they will be ignored
and lead to a simple EOF. That's not ideal: in most cases we should make
a best-effort attempt to send a 4XX error code before we shut the client
down.
Modifications:
Provided a new ChannelHandler that generates 400 errors when the HTTP
decoder fails.
Added a flag to automatically add that handler to the channel pipeline.
Added the handler to the HTTP sample server.
Enabled integration test 12.
Result:
Easier error handling for HTTP servers.
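In later NIO versions this flag surfaces on the pipeline-configuration helper; a sketch assuming `configureHTTPServerPipeline(withErrorHandling:)` and a hypothetical `MyHTTPHandler` (check your NIO version for the exact signature):

```swift
import NIO
import NIOHTTP1  // both from the swift-nio package

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let bootstrap = ServerBootstrap(group: group)
    .childChannelInitializer { channel in
        // withErrorHandling: true installs the handler that answers
        // HTTP decoder failures with a best-effort 400 before closing.
        channel.pipeline.configureHTTPServerPipeline(withErrorHandling: true).flatMap {
            channel.pipeline.addHandler(MyHTTPHandler())  // hypothetical
        }
    }
```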
Motivation:
Previously we tried to use `host` to determine localhost's IPv4 address.
That didn't work reliably in Docker and sometimes on macOS. Let's just
fix this to 127.0.0.1.
Modifications:
Hardcode localhost's IPv4 address as 127.0.0.1.
Result:
More stable test_16_tcp_client_ip
Motivation:
HTTP pipelining can be tricky to handle properly on the server side.
In particular, it's very easy to write out of order or inconsistently
mutate state. Users often need help to handle this appropriately.
Modifications:
Added a HTTPServerPipelineHandler that only lets one request through
at a time.
Result:
Servers that are better able to handle HTTP pipelining.
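Assuming the helper-based setup of later NIO versions, the handler is enabled per connection via the pipelining-assistance flag (or added by hand with `addHandler`); a sketch:

```swift
import NIO
import NIOHTTP1  // both from the swift-nio package

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let bootstrap = ServerBootstrap(group: group)
    .childChannelInitializer { channel in
        // withPipeliningAssistance: true inserts HTTPServerPipelineHandler,
        // which holds pipelined requests back until the response to the
        // current one has been fully written.
        channel.pipeline.configureHTTPServerPipeline(withPipeliningAssistance: true)
    }
```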
Motivation:
Our shell scripts didn't have license headers but they should.
Modifications:
Add licensing headers to all shell scripts
Result:
Licensing clear for shell scripts too.
Motivation:
Currently our HTTP server almost always just assumes that keep-alive
is enabled. It really shouldn't; it should validate more carefully.
Additionally, it mishandles TCP half-close in a way that breaks with
netcat.
Modifications:
Added keep-alive tracking to all handlers and ensured that connections
will be closed after a response to a keep-alive request has been sent.
Also added support for handling TCP half-close.
Result:
Non-keep-alive connections will be closed by the server in all cases.
Half-closed connections will be appropriately managed.