Why Don't Duplicates In A Relationship Violate A UniqueConstraint?
Solution 1:
Those three things are all correct. 3) should be qualified: the UniqueConstraint
always works in the sense that your database will never become inconsistent; it just doesn't give you an error until the pending relationship you're adding is flushed.
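To see that the constraint itself is always enforced at the database level, here's a minimal sketch using the stdlib sqlite3 module with a hypothetical link table (the table and column names are illustrative, not from your models):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE link_user_size (
        user_id INTEGER,
        size_id INTEGER,
        UNIQUE (user_id, size_id)
    )
""")
conn.execute("INSERT INTO link_user_size VALUES (1, 1)")

# The second identical row violates the UNIQUE constraint the moment
# the database sees it -- there is no way to sneak a duplicate in.
violated = False
try:
    conn.execute("INSERT INTO link_user_size VALUES (1, 1)")
except sqlite3.IntegrityError:
    violated = True
```

The error you sometimes don't see is therefore an ORM-side artifact: the database only checks rows when they are actually sent to it, i.e. at flush time.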
The fundamental reason this happens is an impedance mismatch between an association table in SQL and its representation in SQLAlchemy. A table in SQL is a multiset of tuples, so with that UNIQUE
constraint, your LinkUserSizeShirtDressSleeve
table is a set of (size_id, user_id)
tuples. On the other hand, the default representation of a relationship in SQLAlchemy an ordered list
of objects, but it imposes some limitations on the way it maintains this list and the way it expects you to interact with this list, so it behaves more like a set
in some ways. In particular, it silently ignores duplicate entries in your association table (if you happen to not have a UNIQUE
constraint), and it assumes that you never add duplicate objects to this list in the first place.
If this is a problem for you, just make the behavior more in line with SQL by using collection_class=set
on your relationship. If you want an error to be raised when you add duplicate entries into the relationship, create a custom collection class based on set
that fails on duplicate adds. In some of my projects, I've resorted to monkey-patching the relationship
constructor to set collection_class=set
on all of my relationships to make this less verbose.
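The collection_class=set approach can be sketched as follows; the User/Size models and link table here are hypothetical stand-ins for your own, not the names from your schema:

```python
from sqlalchemy import Column, ForeignKey, Integer, Table, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# Composite primary key doubles as the uniqueness guarantee on the pair.
link = Table(
    "link_user_size", Base.metadata,
    Column("user_id", ForeignKey("users.id"), primary_key=True),
    Column("size_id", ForeignKey("sizes.id"), primary_key=True),
)

class Size(Base):
    __tablename__ = "sizes"
    id = Column(Integer, primary_key=True)

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    # collection_class=set gives the Python side the same set semantics
    # as the SQL table: adding the same Size twice is a silent no-op
    # rather than a pending duplicate row.
    sizes = relationship(Size, secondary=link, collection_class=set)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
user, size = User(), Size()
user.sizes.add(size)
user.sizes.add(size)  # deduplicated by the set, never reaches the DB
session.add(user)
session.commit()
```

With the list default, the second append would have queued a duplicate row and blown up at flush; with a set it simply never happens.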
Here's how I would write such a custom collection class:
class UniqueSet(set):
    def add(self, el):
        if el in self:
            raise ValueError("Value already exists")
        super().add(el)
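You would then pass collection_class=UniqueSet to relationship(). A quick standalone check of the behavior (restating the class so the snippet runs on its own):

```python
class UniqueSet(set):
    def add(self, el):
        if el in self:
            raise ValueError("Value already exists")
        super().add(el)

s = UniqueSet()
s.add(1)

# The second add of the same element raises instead of being
# silently ignored, which is exactly the loud failure we want.
duplicate_rejected = False
try:
    s.add(1)
except ValueError:
    duplicate_rejected = True
```

Note that this only guards add(); bulk operations like update() or constructing UniqueSet from an iterable bypass it, which is fine for SQLAlchemy's purposes since the ORM populates collections element by element.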