
I don't see anything concerning. Mechanistic interpretability research indicates that LLM internals are inherently parallel: many features "light up" at once, then the strongest ones "win" and contribute to the output.
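A toy sketch of that idea (this is an illustration, not any real model's internals): many features activate in parallel for an input, and only the strongest few are kept and summed into the output, similar in spirit to top-k sparse feature decompositions.

```python
import numpy as np

# Toy illustration: parallel feature activations, strongest k "win".
rng = np.random.default_rng(0)

n_features = 16
activations = rng.normal(size=n_features)           # all features fire in parallel
feature_outputs = rng.normal(size=(n_features, 4))  # each feature's output direction

k = 3  # keep only the k strongest features
top = np.argsort(np.abs(activations))[-k:]          # indices of strongest activations
mask = np.zeros(n_features)
mask[top] = 1.0

# The output is the sum of the winning features' contributions.
output = (activations * mask) @ feature_outputs
```

Here `n_features`, `k`, and the random "feature directions" are all made-up parameters chosen for the sketch; the point is only the parallel-activation-then-winner-selection structure.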

I'd guess it suggests walking if a feature indicates that the question is so simple it doesn't warrant step-by-step analysis.



