This is mostly for my own benefit, since I'm debugging some nasty locking problems at the moment, but here's the query you run to list the open locks in PostgreSQL, joined against pg_class so you can see the name of the table each lock is held against.
SELECT pg_class.relname, pg_locks.* FROM pg_locks JOIN pg_class ON pg_class.oid = pg_locks.relation;
This gives you something like the following (some rows elided):
    relname    | relation | database | transaction |  pid  |        mode         | granted
---------------+----------+----------+-------------+-------+---------------------+---------
 pg_class      |     1259 |    83813 |           . | 27461 | AccessShareLock     | t
 pg_locks      |    16757 |    83813 |           . | 27461 | AccessShareLock     | t
 os_user_group |    93730 |    83813 |           . | 27471 | AccessExclusiveLock | t
 os_user       |    93746 |    83813 |           . | 27471 | AccessExclusiveLock | f
 os_user       |    93746 |    83813 |           . | 27423 | AccessShareLock     | t
As you can see here, process 27423 holds a shared lock on the os_user table, probably due to some kind of read. Process 27471 has then requested an exclusive lock on the same table, and since the first process is still holding its shared lock, the exclusive lock hasn't been granted (granted is f).
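If you want to reproduce this kind of situation yourself, a minimal sketch looks something like the following (using os_user as the example table); the important detail is that the shared lock is held until the first transaction ends:

```sql
-- Session 1: take an AccessShareLock and keep it by leaving the transaction open
BEGIN;
SELECT * FROM os_user;
-- (no COMMIT yet, so the AccessShareLock is still held)

-- Session 2: this requires an AccessExclusiveLock, so it blocks
-- until session 1 commits or rolls back
LOCK TABLE os_user IN ACCESS EXCLUSIVE MODE;
```

Running the pg_locks query above from a third session while session 2 is blocked should show the same granted = t / granted = f pattern as in the output above.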
The fact that this is causing a deadlock at a higher level, and that 27471 will wait indefinitely for a lock that is never going to arrive, isn't really the database's fault :)
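If you're on a reasonably recent PostgreSQL, you can go a step further and join pg_locks against pg_stat_activity to see what the blocking and waiting backends are actually running. Something along these lines should work, though the column names have shifted over the versions (the query column was called current_query before 9.2):

```sql
-- For each table-level lock, show which backend holds or wants it
-- and what that backend is currently executing
SELECT l.relation::regclass AS relname,
       l.pid,
       l.mode,
       l.granted,
       a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation IS NOT NULL
ORDER BY l.relation, l.granted;
```

The regclass cast saves you the manual join against pg_class, and rows with granted = f point straight at the backends that are stuck waiting.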