Daniel Axtens [Tue, 17 Mar 2020 13:59:14 +0000 (00:59 +1100)]
REST: fix patch listing query
The patch listing query is punishingly slow under even very simple
filters.
The new data model in 3.0 will help _a lot_, so this is a simple fix: I
did try indexes, but haven't gone deep into the weeds of what we can do
with them.
Move a number of things from select_related to prefetch_related: we trade off
one big, inefficient query for a slightly larger number of significantly more
efficient queries.
On my laptop with 2 copies of the canonical kernel team list loaded into the
database, and considering only the API view (the JSON view is always faster)
with warm caches and considering the entire set of SQL queries:
- /api/patches/?project=1
~1.4-1.5s -> <100ms, something like 14x better
- /api/patches/?project=1&since=2010-11-01T00:00:00
~1.7-1.8s -> <560ms, something like 3x better (now dominated by the
counting query only invoked on the HTML API view,
not the pure JSON API view.)
The things I moved:
* project: this was generating SQL that looked like:
INNER JOIN `patchwork_project` T5
ON (`patchwork_submission`.`project_id` = T5.`id`)
This is correct, but we've already had to join the patchwork_submission
table, and perhaps as a result the extra join seems to be inefficient.
* series__project: Likewise, we've already had to join the series table;
doing another join is possibly why it is inefficient.
* delegate: I do not know why this was tanking performance. I think it
might relate to the strategy MySQL was using.
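In Django terms, the shape of the change is roughly the following (a
hedged sketch, not the literal diff; the field names are the ones listed
above):
    # Before: one big, JOIN-heavy query.
    patches = Patch.objects.select_related(
        'project', 'series__project', 'delegate')

    # After: a few small queries that Django stitches together in
    # Python, each one far cheaper than the combined JOIN.
    patches = Patch.objects.prefetch_related(
        'project', 'series__project', 'delegate')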
Reported-by: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Stephen Finucane <stephen@that.guru>
Daniel Axtens [Tue, 17 Mar 2020 13:59:13 +0000 (00:59 +1100)]
REST: massively improve the patch counting query under filters
The DRF web view counts the patches as part of pagination.
The query it uses is a disaster zone:
SELECT Count(*) FROM (
SELECT DISTINCT
`patchwork_submission`.`id` AS Col1,
`patchwork_submission`.`msgid` AS Col2,
`patchwork_submission`.`date` AS Col3,
`patchwork_submission`.`submitter_id` AS Col4,
`patchwork_submission`.`project_id` AS Col5,
`patchwork_submission`.`name` AS Col6,
`patchwork_patch`.`submission_ptr_id` AS Col7,
`patchwork_patch`.`commit_ref` AS Col8,
`patchwork_patch`.`pull_url` AS Col9,
`patchwork_patch`.`delegate_id` AS Col10,
`patchwork_patch`.`state_id` AS Col11,
`patchwork_patch`.`archived` AS Col12,
`patchwork_patch`.`hash` AS Col13,
`patchwork_patch`.`patch_project_id` AS Col14,
`patchwork_patch`.`series_id` AS Col15,
`patchwork_patch`.`number` AS Col16,
`patchwork_patch`.`related_id` AS Col17
FROM `patchwork_patch`
INNER JOIN `patchwork_submission`
ON (`patchwork_patch`.`submission_ptr_id`=`patchwork_submission`.`id`)
WHERE `patchwork_submission`.`project_id`=1
)
This is because django-filters adds a DISTINCT qualifier on a
ModelMultipleChoiceFilter by default. I guess it makes sense and they do a
decent job of justifying it, but it causes the count to be made with
this awful subquery. (The justification is that they don't know if you're
filtering on a to-many relationship, in which case there could be
duplicate values that need to be removed.)
While fixing that, we can also tell the filter to filter on patch_project
rather than submission's project, which allows us in some cases to avoid
the join entirely.
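A hedged sketch of the filterset change (class and import names are
assumptions based on the models discussed above; the real code is more
involved):
    from django_filters import FilterSet, ModelMultipleChoiceFilter
    from patchwork.models import Patch, Project

    class PatchFilterSet(FilterSet):
        # distinct=False: a patch has exactly one project, so dropping
        # DISTINCT cannot introduce duplicate rows, and the count loses
        # its awful subquery.
        # field_name='patch_project': filter on patchwork_patch itself,
        # often avoiding the JOIN onto patchwork_submission entirely.
        project = ModelMultipleChoiceFilter(
            field_name='patch_project',
            queryset=Project.objects.all(),
            distinct=False,
        )

        class Meta:
            model = Patch
            fields = ['project']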
The resultant SQL is beautiful when filtering by project only:
SELECT COUNT(*) AS `__count`
FROM `patchwork_patch`
WHERE `patchwork_patch`.`patch_project_id` = 1
On my test setup (2x canonical kernel mailing list in the db, warm cache,
my laptop) this query goes from >1s to ~10ms, a ~100x improvement.
If we filter by project and date, the query is still nice, but also
still quite slow:
SELECT COUNT(*) AS `__count`
FROM `patchwork_patch`
INNER JOIN `patchwork_submission`
ON (`patchwork_patch`.`submission_ptr_id`=`patchwork_submission`.`id`)
WHERE (`patchwork_patch`.`patch_project_id`=1 AND `patchwork_submission`.`date`>='2010-11-01 00:00:00')
This takes us from ~1.3s to a bit under 400ms - still not ideal, but
I'll take the 3x improvement!
Reported-by: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Stephen Finucane <stephen@that.guru>
Mete Polat [Thu, 27 Feb 2020 23:29:32 +0000 (23:29 +0000)]
REST: Add patch relations
View relations and add/update/delete them as a maintainer. Maintainers
can only create relations of patches which are part of a project they
maintain. Because this is a writable many-many nested relationship, it
behaves a little unusually. In short:
- All operations use PATCH to the 'related' field of a patch
- To relate a patch to another patch, say 7 to 19, either:
PATCH /api/patch/7 related := [19]
PATCH /api/patch/19 related := [7]
- To delete a patch from a relation, say 1, 21 and 42 are related but we
only want it to be 1 and 42:
PATCH /api/patch/21 related := []
* You _cannot_ remove a patch from a relation by patching another
patch in the relation: I'm trying to avoid read-modify-write loops.
* Relations that would be left with just 1 patch are deleted. This is
only ensured in the API - the admin interface will let you do this.
- Break-before-make: if you have [1, 12, 24] and [7, 15, 42] and you want
to end up with [1, 12, 15, 24], you have to remove 15 from the second
relation first:
PATCH /api/patch/1 related := [15] will fail with 409 Conflict.
Instead do:
PATCH /api/patch/15 related := []
PATCH /api/patch/1 related := [15]
-> 200 OK, [1, 12, 15, 24] and [7, 42] are the resulting relations
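For example, the break-before-make dance might look like this with the
requests library (URL, token and endpoint spelling are placeholders
mirroring the schematic paths used above):
    import requests

    api = 'https://patchwork.example.com/api'             # placeholder
    auth = {'Authorization': 'Token <maintainer-token>'}  # placeholder

    # Break-before-make: detach 15 from [7, 15, 42] first...
    requests.patch(api + '/patch/15', headers=auth, json={'related': []})
    # ...then fold it into patch 1's relation.
    requests.patch(api + '/patch/1', headers=auth, json={'related': [15]})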
Signed-off-by: Mete Polat <metepolat2000@gmail.com>
Signed-off-by: Stephen Finucane <stephen@that.guru>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Mete Polat [Thu, 27 Feb 2020 23:29:31 +0000 (23:29 +0000)]
models, templates: Add patch relations
Introduces the ability to add relations between patches. Relations are
displayed in the details page of a patch under 'Related'. Related
patches located in other projects can be viewed as well.
Changes to relations are tracked in events. Currently the display of
this is very bare in the API but that will be fixed in a subsequent patch:
this is the minimum required to avoid throwing errors when you view the
events feed.
Signed-off-by: Mete Polat <metepolat2000@gmail.com>
[dja: address some review comments from Stephen, add an admin view,
move to using Events, misc tidy-ups.]
Signed-off-by: Daniel Axtens <dja@axtens.net>
Migration 0039 attempts to move patches that have ended up in an
arbitrary series due to race conditions into the correct series.
However, there are a number of race conditions that can occur here that
make this particularly tricky to do. Given that series are really just
arbitrary metadata, it's not really necessary to do this...so don't.
Instead, just delete the series references that have identical message IDs
and belong to the same project, allowing us to add the uniqueness
constraint and prevent the issue bubbling up in the future. This means
we're still left with orphaned series but these could be fixed manually,
if necessary.
Signed-off-by: Stephen Finucane <stephen@that.guru>
Tested-by: Ali Alnubani <alialnu@mellanox.com>
Closes: #340
parser: Don't group patches with different versions in a series
As noted in #340 [1], if a patch from a series is dropped or
miscategorised, patches from a later revision of that series can end up
included in the earlier series rather than in their own series. This was
actually intentional as part of the fix for #105 [2]. However,
completely ignoring this information can be problematic. Refine things
by checking for versions and, if they don't match, using timeboxing to
try to guess if they should be kept together. This would resolve the issue
seen in #340 while preventing a regression for #105.
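Schematically, the refined grouping might look like this (purely
illustrative names; the actual logic lives in patchwork/parser.py):
    if found_series and found_series.version != version:
        # Different version markers: only group the patches if they
        # arrived close enough together to plausibly be one series.
        if (message_date - found_series.date) > SERIES_TIMEBOX:
            found_series = None  # create a new series instead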
Daniel Axtens [Tue, 28 Jan 2020 13:55:20 +0000 (00:55 +1100)]
pyenv: also install requirements for python2
The first time you do a migration with python3, you get a whole
lot of seemingly null changes. This is a bit annoying so I use
py2 to generate the changes. To do that, first fix the pyenv
transition so requirements are still installed for python2.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Stephen Finucane <stephen@that.guru>
Pranav Annam [Wed, 12 Feb 2020 03:35:47 +0000 (04:35 +0100)]
docker: update dependency for current build
The dependency libssl1.0-dev in the Dockerfile makes docker build fail:
The following packages have unmet dependencies:
libmysqlclient-dev : Depends: libssl-dev (>= 1.1.1-1ubuntu2.1~18.04.5~)
but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
There seems to be a conflict with different versions of libssl and
libmysqlclient that did not exist previously with Ubuntu 18.04.
Just use the current libssl-dev from Ubuntu 18.04 to fix the build.
Mete Polat [Wed, 29 Jan 2020 19:01:22 +0000 (20:01 +0100)]
REST: Fix duplicate project queries
Eliminates duplicate project queries caused by calling
get_absolute_url() in the embedded serializers. Following foreign keys
with 'series__project' will cache the project of the series as well as
the series itself.
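In queryset terms (a sketch):
    # A single JOIN caches both patch.series and patch.series.project,
    # so get_absolute_url() in the embedded serializers is served from
    # memory instead of issuing duplicate project queries.
    patches = Patch.objects.select_related('series__project')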
Signed-off-by: Mete Polat <metepolat2000@gmail.com>
Signed-off-by: Stephen Finucane <stephen@that.guru>
Closes: #335
Another fix for copy-pasted pull requests, this time for cases
when something is copy-pasted from a terminal and retains all
the bogus trailing whitespace.
Add tests for the recent changes we made to how we parse multiple series
received at once. These tests actually highlighted what appeared to be
the test failure that's been intermittently breaking our CI for years
now, so the 'expectedFailure' marker has been removed in the hope that
the failure has actually been fixed.
Signed-off-by: Stephen Finucane <stephen@that.guru>
parser: Use a second query to weed out duplicate series
Annoyingly, not all email clients properly thread emails using the
message ID fields originally specified in RFC 822 [1]. Worse, some MTAs
(cough, outlook.com, cough) actually override what the client
configures, breaking the world in the process. Realising this is an
issue, Patchwork supports threading using arbitrary metadata in addition
to the RFC 822 metadata. Specifically, it uses a combination of
submitter and list-id extracted from the headers along with the series
version and total count metadata extracted from the subject. In addition
to this, we timebox things so that two or more series that match on all
of this metadata but which are sent some time apart from each other
aren't combined by accident. This does leave one edge case - duplicate
series received within the timebox will be combined. We've resigned
ourselves to this fact on the basis that it's extremely unlikely for all
of these things to go wrong at once.
Given all the above, there should be no reason that attempting to find
series by series markers should return more than one series. The
timeboxing will prevent us grouping similar looking series by accident
and the only other reason for this to happen is because we lost a race
and we should try again.
[1] https://tools.ietf.org/html/rfc822
Signed-off-by: Stephen Finucane <stephen@that.guru>
Cc: Daniel Axtens <dja@axtens.net>
models: Use database constraints to prevent split Series
Currently, the 'SeriesReference' object has a unique constraint on the
two fields it has, 'series', which is a foreign key to 'Series', and
'msgid'. This is the wrong constraint. What we actually want to enforce
is that a patch, cover letter or comment is referenced by a single
series, or rather a single series per project the submission appears on.
As such, we should be enforcing uniqueness on the msgid and the project
that the patch, cover letter or comment belongs to.
This requires adding a new field to the object, 'project', since it's
not possible to do something like the following:
unique_together = [('msgid', 'series__project')]
This is detailed here [1]. In addition, the migration needs a precursor
migration step to merge any broken series.
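A sketch of the resulting model (field options simplified; the real
model and migration are more involved):
    from django.db import models

    class SeriesReference(models.Model):
        series = models.ForeignKey('Series', on_delete=models.CASCADE)
        msgid = models.CharField(max_length=255)
        # New field: lets the database enforce "one series per message
        # ID per project", since 'series__project' can't be used in a
        # unique_together clause directly.
        project = models.ForeignKey('Project', on_delete=models.CASCADE)

        class Meta:
            unique_together = [('msgid', 'project')]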
These tests are failing due to differences in the backend's behavior. Since
this will never be used for production, we can simply skip these unit
tests and rely on the CI to catch potential issues.
Signed-off-by: Stephen Finucane <stephen@that.guru>
Stephen Finucane [Fri, 20 Dec 2019 10:49:21 +0000 (10:49 +0000)]
models: Add State.slug field
This reduces a lot of the tech debt we had built up around this. This
field is populated from a slugified representation of State.name and has
a uniqueness constraint. As a result, we also need to add a uniqueness
constraint to the State.name field. This shouldn't be an issue and
probably should have been used from day one.
Note that there is no REST API impact for this since we've been using
state slugs all along.
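A minimal sketch of the idea (the real model and migration are more
involved):
    from django.db import models
    from django.utils.text import slugify

    class State(models.Model):
        name = models.CharField(max_length=100, unique=True)
        slug = models.SlugField(max_length=100, unique=True)

        def save(self, *args, **kwargs):
            # Populate the slug from the name, as described above.
            self.slug = slugify(self.name)
            super(State, self).save(*args, **kwargs)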
Signed-off-by: Stephen Finucane <stephen@that.guru>
Stephen Finucane [Sat, 30 Nov 2019 18:48:35 +0000 (18:48 +0000)]
tests: Provide a way to disable API schema
The API schema validation is strict, in that it will error out with
invalid keys in either the request or response. Unfortunately the API
itself is not. We're hopefully going to fix this in a distant v2.0, but
for now we need a way to ensure that the API does what it's supposed to,
namely not set fields that don't exist or that the user isn't allowed to
set, even if proper error codes aren't raised.
This isn't actually used yet. That will come later.
Signed-off-by: Stephen Finucane <stephen@that.guru>
Mete Polat [Sat, 7 Dec 2019 16:46:20 +0000 (17:46 +0100)]
docs: Add missing series index schema
Fixes: 7d8e24bc84bd ("docs: Start documenting API using OpenAPI")
Signed-off-by: Mete Polat <metepolat2000@gmail.com>
Reviewed-by: Stephen Finucane <stephen@that.guru>
requirements: Add pyup markers to prevent dumb PRs
Until [1] is merged, we're going to have to override what these markers
are doing. Perhaps it would be easier to just specify the markers in the
comments as the actual marker, but I like using pip's features and the
comments *should* be temporary.
[1] https://github.com/pyupio/pyup/pull/367
Signed-off-by: Stephen Finucane <stephen@that.guru>
Johan Herland [Sun, 1 Dec 2019 01:49:53 +0000 (02:49 +0100)]
Include the responsible actor in applicable events
We want to use the events as an audit log. An important part of this is
recording _who_ made the changes that the events represent.
To accomplish this, we need to know the current user (aka. request.user)
at the point where we create the Event instance. Event creation is
currently triggered by signals, but neither the signal handlers, nor the
model classes themselves have easy access to request.user.
For some Patch-based events (patch-state-changed, patch-delegated), we
can do the following hack: The relevant events are created in signal
handlers that are all hooked up to either the pre_save or post_save
signals sent by Patch.save(). But before calling Patch.save(),
Patchwork must naturally query Patch.is_editable() to ascertain whether
the patch can in fact be changed by the current user. Thus, we only
need a way to communicate the current user from Patch.is_editable()
to the signal handlers that create the resulting Events. The Patch
object itself is available in both places, so we simply add an
'_edited_by' attribute to the instance (which fortunately is not
detected as a persistent db field by Django).
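Schematically (simplified; the real names in patchwork/models.py and
the signal handlers differ, and '_check_editable' here is hypothetical):
    class Patch(models.Model):
        def is_editable(self, user):
            # Remember who is about to edit this patch; '_edited_by' is
            # a plain instance attribute, so Django does not treat it as
            # a persistent database field.
            self._edited_by = user
            return self._check_editable(user)  # the existing checks

    # Later, in the post_save handler that creates the event:
    def create_state_changed_event(sender, instance, **kwargs):
        actor = getattr(instance, '_edited_by', None)
        Event.objects.create(category='patch-state-changed',
                             patch=instance, actor=actor)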
For the check-created event the current user always happens to be the
same as the 'user' field recorded in the Check object itself.
For the other Patch-based events (patch-created, patch-completed, and
series-completed), although they are also triggered by Patch.save(),
they are triggered as a result of incoming emails, hence have no real
actor as such, so we simply leave the actor as None/NULL. The same
argument also applies to the cover-created and series-created events.
Signed-off-by: Johan Herland <johan@herland.net>
Reviewed-by: Stephen Finucane <stephen@that.guru>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Stephen Finucane [Sat, 30 Nov 2019 17:11:13 +0000 (17:11 +0000)]
templates: Use 'en' locale
As discussed at [1], the UI was originally written in Australian English,
but it has been through a couple of pairs of hands since, so chances are
things are more than a little messed up. Just use 'en' as our locale
rather than 'en-US', 'en-AU' or anything else.
Jeremy Cline [Tue, 15 Oct 2019 21:30:11 +0000 (17:30 -0400)]
Allow ordering events by date
By default, the events API orders events by date in descending order
(newest first). However, it's useful to be able to order the events by
oldest events first. For example, when a client is polling the events
API for new events since a given date and wishes to process them in
chronological order.
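For example, such a poller might request (parameter names assumed from
the API documentation; values illustrative):
    GET /api/events/?since=2019-10-01T00:00:00&order=date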
Signed-off-by: Jeremy Cline <jcline@redhat.com>
Reviewed-by: Stephen Finucane <stephen@that.guru>
Stephen Finucane [Sat, 30 Nov 2019 16:24:36 +0000 (16:24 +0000)]
REST: Exclude filters added in later version
If a person requests API version 1.1, they should get the exact same
behavior regardless of the base Patchwork version. We already do this
for fields in the output, so now extend this to filters in the
querystring.
Signed-off-by: Stephen Finucane <stephen@that.guru>
Cc: Daniel Axtens <dja@axtens.net>
Daniel Axtens [Wed, 23 Oct 2019 14:33:42 +0000 (01:33 +1100)]
api: support filtering patches by hash
This is a feature that the XML-RPC API has, and which is used in
the wild [1], so support it in the REST API.
I tried to version the new filter field, but it's not at all clear
how to do this with django-filters. The best way I could find
requires manually manipulating request.GET, which seems to defeat
the point of django-filters. So document it for 1.2, and have it
work on older versions as an undocumented feature.
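Usage mirrors the other filters, e.g. (hash value illustrative):
    GET /api/patches/?hash=5a3b1c9e2f...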
Cc: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Acked-by: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Ali Alnubani [Mon, 21 Oct 2019 15:06:24 +0000 (15:06 +0000)]
docs: Fix note about the required Postfix rights
By default, external file or command delivery is not performed with
the permissions of the user running the Postfix process, but with the
ones defined by default_privs (in the case where the root-owned
aliases(5) file is being used). A privileged user or the Postfix
owner should not be used in this case.
See http://www.postfix.org/postconf.5.html#default_privs and
local(8).
Signed-off-by: Ali Alnubani <alialnu@mellanox.com>
Reviewed-by: Stephen Finucane <stephen@that.guru>
Two issues here. Firstly, the use of the 'USE_I18N' setting. The Django
docs describe it as follows:
A boolean that specifies whether Django’s translation system should
be enabled. This provides an easy way to turn it off, for performance.
If this is set to False, Django will make some optimizations so as not
to load the translation machinery.
We don't do translations and won't until such a time as someone comes
asking for them. Optimize things accordingly by setting 'USE_I18N' to
False and removing the now-unnecessary 'LANGUAGE_CODE' setting.
Secondly, the use of en-AU is a bit of a lie since our UI is actually
written in US English (or should be). The primary reason for a lang tag
to be present is to assist screenreaders and other accessibility tools,
so make their lives easier by reflecting the truth.
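The settings change itself is tiny (a sketch):
    # settings.py
    USE_I18N = False  # we don't translate; skip the translation machinery
    # LANGUAGE_CODE removed; templates now declare a plain 'en' lang tag.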
Signed-off-by: Stephen Finucane <stephen@that.guru>
When git-request-pull output is pasted into a mail client instead of
mailed directly, the ref part of the pull URL may end up wrapped to the
next line.
Daniel Axtens [Tue, 29 Oct 2019 07:04:14 +0000 (18:04 +1100)]
README: stop trying to track supported versions
We're not doing a good job of it, the versions are out of date and
we keep forgetting to update the README. We are a bit better at
making release notes, so just point people there.
Reviewed-by: Stephen Finucane <stephen@that.guru>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Daniel Axtens [Tue, 29 Oct 2019 07:10:44 +0000 (18:10 +1100)]
README: fix .env
The .env setup didn't do GID. It's a bit of a chore to do because
there doesn't seem to be a GID shell variable and because we need
to do a bit more work to get a multi-line thing, but this should
work.
While we're at it, change the docker-compose info, it's hopelessly
out of date.
Reviewed-by: Stephen Finucane <stephen@that.guru>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Daniel Axtens [Thu, 24 Oct 2019 03:17:04 +0000 (14:17 +1100)]
travis, tox: only do coverage testing in py27
Currently Travis calls `tox -e coverage` unconditionally. However,
the environment has py27 basepython, so all the runs only generate
py27 coverage!
Rather than trying to untangle that, just run coverage when running in
a py27 Travis environment. This makes things faster for no
loss of coverage. It means that codecov has nothing to submit for
the py3x environments, but that's no real loss: it would otherwise
submit lots of duplicate data.
We could try to improve coverage by running coverage for 27 and 3x,
but given that 27 is going away, don't stress at this point.
Daniel Axtens [Wed, 23 Oct 2019 22:12:50 +0000 (09:12 +1100)]
travis: fix codecov
In e017f69376da ("travis: run pep8/flake8 tests"), codecov
was removed from the install step, on the basis that tox-travis
would pull it in automatically.
This, it turns out, isn't entirely true: it is pulled in to the
tox environment, but the data is actually sent in an after_success
step. That is outside the tox environment, as it should be - if it
were part of the tox environment, running `tox -e coverage` on a
developer laptop would try to send data to the web. But, as codecov
now isn't present outside the tox environment, we see in the logs:
$ codecov
codecov: command not found
We don't get any reporting of success/failure in the after_success
step, so we didn't notice.
Restore the installation in the travis environment.
Fixes: e017f69376da ("travis: run pep8/flake8 tests")
Signed-off-by: Daniel Axtens <dja@axtens.net>
Andrew Donnellan [Wed, 16 Oct 2019 04:23:07 +0000 (15:23 +1100)]
parser: Unmangle From: headers that have been mangled for DMARC purposes
To avoid triggering spam filters due to failed signature validation, many
mailing lists mangle the From header to change the From address to be the
address of the list, typically where the sender's domain has a strict DMARC
policy enabled.
In this case, we should try to unmangle the From header.
Add support for using the X-Original-From or Reply-To headers, as used by
Google Groups and Mailman respectively, to unmangle the From header when
necessary and associate the patch with the correct submitter based on the
unmangled email address.
When downloading mboxes, rewrite the From header using the unmangled
address, and preserve the original header as X-Patchwork-Original-From in
case someone needs it for some reason. The original From header will still
be stored in the database and exposed via the API, as we want to keep
messages as close to the original received format as possible.
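A simplified sketch of the unmangling (purely illustrative; the real
parser only rewrites when the From header actually looks mangled):
    def unmangle_from(msg):
        # Google Groups preserves the original author in
        # X-Original-From; Mailman keeps it in Reply-To when it
        # rewrites From for DMARC reasons.
        for header in ('X-Original-From', 'Reply-To'):
            original = msg.get(header)
            if original:
                return original
        return msg.get('From')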
Closes: #64 ("Incorrect submitter when using googlegroups")
Reported-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com>
Tested-by: Daniel Axtens <dja@axtens.net> # mailman only
[dja: add release note]
Signed-off-by: Daniel Axtens <dja@axtens.net>
Jeremy Cline [Wed, 9 Oct 2019 19:03:45 +0000 (15:03 -0400)]
Use secrets and fall back to random.SystemRandom for keys
The random module uses the Mersenne Twister pseudorandom number
generator and is not a cryptographically secure random number
generator[0]. The secrets[1] module is intended for generating
cryptographically strong random numbers, so use that to generate the
secret key. It's new in Python 3, so if it's unavailable,
fall back to using the ``os.urandom()`` backed implementation of random.
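The pattern is roughly (a sketch; the character set and key length are
illustrative):
    import string

    try:
        from secrets import choice
    except ImportError:
        # Python 2: no secrets module; SystemRandom is backed by
        # os.urandom() and is suitable for cryptographic use.
        from random import SystemRandom
        choice = SystemRandom().choice

    chars = string.ascii_letters + string.digits + string.punctuation
    secret_key = ''.join(choice(chars) for _ in range(50))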
NOTE(stephenfin): Modified to include change to 'config.yaml'. Also
renamed reno to just stick with hyphens for filenames.
Stephen Finucane [Tue, 15 Oct 2019 12:21:41 +0000 (13:21 +0100)]
docker: Rely on caching
It seems less likely that tox and tox-pyenv will change than our
requirements. Split up the 'RUN' steps so we don't have to reinstall the
former every time the latter change.
Signed-off-by: Stephen Finucane <stephen@that.guru>
The usual sub-resources (mbox, raw) hang off those URLs.
The old style URLs (/patch/NNN/*, /cover/NNN/*) redirect appropriately.
I haven't attempted to do anything meaningful with series, and I
have dropped any attempt to provide a generic message-id lookup
or search functionality. One step at a time.
Our database still stores message ids as with angle brackets; we
just work around that rather than trying to migrate. That too can
come later if we think the pain is justified.
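For instance, the lookup shim might be as small as (hypothetical
helper, not the actual code):
    def db_msgid(raw_msgid):
        # The database stores '<id@host>'; URLs carry the bare 'id@host'.
        return raw_msgid if raw_msgid.startswith('<') \
            else '<%s>' % raw_msgid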
Partially-closes: #106
Reported-by: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Reported-by-but-I-don't-want-to-spam: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Stephen Finucane <stephen@that.guru>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Stephen Finucane [Sat, 21 Sep 2019 18:06:14 +0000 (19:06 +0100)]
Fix issue with delegation of patch via REST API
There have been reports of people being unable to delegate patches to
themselves, despite being a maintainer of the project with which the
patch is associated.
The issue is a result of how we do a check for whether the user is a
maintainer of the patch's project [1]. This check is checking if a given
'User.id' is in the list of items referenced by
'Project.maintainer_project'. However, 'Project.maintainer_project' is a
backref to 'UserProfile.maintainer_projects'. This means we're comparing
'User.id' and 'UserProfile.id'. Boo.
This wasn't seen in testing since we've had a post-save callback [2] for some
time that ensures we always create a 'UserProfile' object whenever we create a
'User' object. This also means we won't have an issue on deployments initially
deployed after that post-save callback was added, a 'User' with id=N will
always have a corresponding 'UserProfile' with id=N. However, that's not true
for older deployments such as the ozlabs.org one.
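Schematically, the broken check and its fix (names inferred from the
description above; illustrative only):
    # project.maintainer_project is a backref yielding UserProfile rows,
    # so filtering by a User id compares ids from different tables:
    is_maintainer = project.maintainer_project.filter(id=user.id).exists()

    # Fixed: follow the profile's user relation so the ids line up:
    is_maintainer = project.maintainer_project.filter(user=user).exists()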
Daniel Axtens [Wed, 18 Sep 2019 06:17:28 +0000 (16:17 +1000)]
parsearchive, mail: use repr() to get a human readable exception
Currently if we have particular types of error in mail parsing
in parsearchive or parsemail, we print exc.message, which doesn't
always work:
Traceback (most recent call last):
File ".../patchwork/management/commands/parsearchive.py", line 90, in handle
obj = parse_mail(msg, options['list_id'])
File ".../patchwork/parser.py", line 961, in parse_mail
raise ValueError("Missing 'Message-Id' header")
ValueError: Missing 'Message-Id' header
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File ".../django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File ".../django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File ".../django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File ".../django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File ".../patchwork/management/commands/parsearchive.py", line 100, in handle
logger.warning('Invalid mail: %s', exc.message)
AttributeError: 'ValueError' object has no attribute 'message'
repr(exc) will work. Use it.
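The fix, in context (a sketch of the parsearchive handler shown in the
traceback above):
    try:
        obj = parse_mail(msg, options['list_id'])
    except ValueError as exc:
        # exc.message only exists on Python 2; repr(exc) works on both.
        logger.warning('Invalid mail: %s', repr(exc))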
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Stephen Finucane <stephen@that.guru>
In commit ab0c443691, we switched from using version ranges to fixed
versions. This was a good idea in so far as it ensures we're providing an
application with dependencies that are guaranteed to work. However,
Patchwork as a project isn't active enough to warrant the continued
busywork necessary to keep bumping these versions, and it's probably
about time to abandon the experiment. Rather than switching
back to version ranges, use the compatible releases feature introduced
in PEP 440 [1]. This gives us most of the benefits of ranges but with a
nicer syntax.
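For example (package and version illustrative):
    # requirements.txt
    Django ~= 2.2.0    # compatible release: >= 2.2.0, < 2.3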