Take the house allocation problem with TTC.
Imagine an agent who can see the preferences of all the other agents. As a result, he can predict how the houses will be allocated.
Can you think of an example in which the agent is better off lying about his preferences than telling the truth?
For example, a situation in which, if the agent reports truthfully, the trading cycles leave him with his last-ranked house, but if he misreports his preferences by pointing to a house that is not his top-ranked option, the trading cycles leave him better off.
Can you think of a numerical example like this?
If not, what are the reasons we cannot construct one? What changes or modifications to the model would be needed?
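One way to probe this question is to brute-force every possible misreport in a small instance. The sketch below (the three-agent preference profile and house labels are illustrative choices, not from the original text) implements TTC and then, for each agent, tries all possible reported rankings while the others report truthfully. The search for a profitable lie comes up empty, consistent with the known strategy-proofness of TTC:

```python
from itertools import permutations

def ttc(prefs, owner):
    """Top Trading Cycles.
    prefs: dict agent -> list of houses, most preferred first.
    owner: dict house -> agent (initial endowment).
    Returns dict agent -> assigned house."""
    remaining = set(owner)          # houses still on the market
    active = set(prefs)             # agents not yet assigned
    assignment = {}
    while active:
        # Each active agent points to the owner of his best remaining house.
        point = {a: owner[next(h for h in prefs[a] if h in remaining)]
                 for a in active}
        # Walk the pointers from an arbitrary agent until we revisit someone;
        # the revisited suffix is a trading cycle.
        seen, cur = [], next(iter(active))
        while cur not in seen:
            seen.append(cur)
            cur = point[cur]
        cycle = seen[seen.index(cur):]
        # Everyone in the cycle gets his best remaining house; remove them.
        for a in cycle:
            assignment[a] = next(h for h in prefs[a] if h in remaining)
        for a in cycle:
            remaining.discard(assignment[a])
            active.discard(a)
    return assignment

# Illustrative instance: agent i initially owns house hi.
true_prefs = {1: ['h2', 'h3', 'h1'],
              2: ['h1', 'h3', 'h2'],
              3: ['h1', 'h2', 'h3']}
owner = {'h1': 1, 'h2': 2, 'h3': 3}
houses = ['h1', 'h2', 'h3']

truthful = ttc(true_prefs, owner)

def rank(agent, house):
    """Position of house in the agent's TRUE ranking (0 = best)."""
    return true_prefs[agent].index(house)

# For each agent, try every possible reported ranking and record any
# misreport that yields a strictly better house under his true preferences.
profitable = []
for a in true_prefs:
    for p in permutations(houses):
        reported = dict(true_prefs)
        reported[a] = list(p)
        alt = ttc(reported, owner)
        if rank(a, alt[a]) < rank(a, truthful[a]):
            profitable.append((a, p))

print(truthful)    # truthful outcome: {1: 'h2', 2: 'h1', 3: 'h3'}
print(profitable)  # [] — no agent can gain by lying
```

The same exhaustive search can be rerun over randomly drawn preference profiles; it never finds a profitable deviation, which is what the exercise's "if not, why?" is driving at.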