A NEW_TOKEN frame is never emitted by a client, hence its parsing was not
tested on the frontend side.
On the backend side, an issue can occur because the expected token length
is static, based on the token length used internally by haproxy. This is
not sufficient for most server implementations, which use larger tokens.
The result is a parsing error, which may cause the following frames in the
same packet to be skipped. This issue was detected using ngtcp2 as the
server.
As tokens are currently unused by haproxy, simply discard the check on
token length during NEW_TOKEN frame parsing. The token itself is merely
skipped without being stored. This is sufficient for now to continue
experimenting with the QUIC backend implementation.
This does not need to be backported.
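
For illustration only (not part of this patch): a minimal standalone sketch
of the dynamic token storage mentioned in the TODO below. The names
dec_varint(), parse_new_token() and QUIC_TOKEN_MAX_LEN are hypothetical and
do not exist in haproxy; only the varint encoding and the empty-token rule
come from RFC 9000.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Arbitrary cap on stored token size to keep memory usage bounded. */
#define QUIC_TOKEN_MAX_LEN 512

/* Decode a QUIC variable-length integer (RFC 9000, section 16). */
static int dec_varint(uint64_t *val, const unsigned char **pos,
                      const unsigned char *end)
{
    size_t len, i;

    if (*pos >= end)
        return 0;

    len = (size_t)1 << (**pos >> 6); /* 1, 2, 4 or 8 bytes */
    if ((size_t)(end - *pos) < len)
        return 0;

    *val = **pos & 0x3f;
    for (i = 1; i < len; i++)
        *val = (*val << 8) | (*pos)[i];

    *pos += len;
    return 1;
}

/* Parse a NEW_TOKEN frame body, copying the token into a heap buffer
 * sized from the peer-provided length, up to QUIC_TOKEN_MAX_LEN bytes.
 */
static int parse_new_token(const unsigned char **pos, const unsigned char *end,
                           unsigned char **token, uint64_t *token_len)
{
    if (!dec_varint(token_len, pos, end) ||
        (uint64_t)(end - *pos) < *token_len ||
        !*token_len || /* empty token is a FRAME_ENCODING_ERROR (RFC 9000, 19.7) */
        *token_len > QUIC_TOKEN_MAX_LEN)
        return 0;

    *token = malloc(*token_len);
    if (!*token)
        return 0;

    memcpy(*token, *pos, *token_len);
    *pos += *token_len;
    return 1;
}
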
{
struct qf_new_token *new_token_frm = &frm->new_token;
- if (!quic_dec_int(&new_token_frm->len, pos, end) || end - *pos < new_token_frm->len ||
- sizeof(new_token_frm->data) < new_token_frm->len)
+ if (!quic_dec_int(&new_token_frm->len, pos, end) || end - *pos < new_token_frm->len)
return 0;
+ /* TODO token length is unknown as it depends on the peer. Hence
+ * dynamic allocation should be implemented for token storage, albeit
+ * with a constraint to ensure memory usage remains reasonable.
+ */
+#if 0
+ if (sizeof(new_token_frm->data) < new_token_frm->len)
+ return 0;
memcpy(new_token_frm->data, *pos, new_token_frm->len);
+#endif
+
*pos += new_token_frm->len;
return 1;
goto err;
}
else {
- /* TODO */
+ /* TODO NEW_TOKEN not implemented on the client side.
+ * Note that for now the token is not copied into the <data>
+ * field of the qf_new_token frame. See quic_parse_new_token_frame()
+ * for further explanation.
+ */
}
break;
case QUIC_FT_STREAM_8 ... QUIC_FT_STREAM_F: