Hacker News

There is no contextual bias; the goal of the prompt is very explicit. The issue is not about probabilistic patterns, but about the model's transformer layers dynamically assigning greater weight to words like "meters" (distance) than to other tokens in the prompt.
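As a toy illustration of the weighting mechanism being described (not the actual model internals): in scaled dot-product attention, each token's weight comes from a softmax over the dot products between a query vector and per-token key vectors. The token list and all vector values below are made up for the example.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(query, keys):
    # Scaled dot-product scores, one per key, then softmax to weights.
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    return softmax(scores)

tokens = ["wash", "car", "500", "meters"]
# Hypothetical key vectors: "meters" points closest to the query
# direction, so it receives the largest weight -- the effect the
# comment above is describing.
keys = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [1.0, 1.0]]
query = [1.0, 1.0]

weights = attention_weights(query, keys)
for tok, w in zip(tokens, weights):
    print(f"{tok}: {w:.2f}")
```

With these made-up vectors, "meters" ends up with the largest share of the attention mass, even though the stated goal is about washing the car.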

This should be fixed in the reasoning layer (the inner thoughts, or chain-of-thought), where the model should focus on the goal "I Want to Wash My Car" rather than the distance, and assign the correct weight to the tokens.




The point is not that there is bias in the prompt. What makes the result obvious to the OP is their own bias, which differs from the model's, and "fixing" it in one direction is itself a form of bias.

Why? For the same reason that 30% of people respond in the non-obvious sense.




