I don't see anything concerning. Mechanistic interpretability research indicates that LLM internals are inherently parallel: many features "light up" at once, and the strongest ones "win" and contribute to the output.
I'd guess it suggests walking if a feature indicates that the question is so simple it doesn't warrant step-by-step analysis.
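As a toy sketch of that "many features activate, strongest win" picture (purely illustrative, not actual model internals — the feature strengths and outputs here are made-up numbers):

```python
import numpy as np

# Hypothetical feature activations firing in parallel; the "question is
# simple" feature (index 1) happens to be the strongest.
feature_activations = np.array([0.2, 3.1, 0.5, 1.8])

# What each feature would "write" toward the output (random toy vectors).
rng = np.random.default_rng(0)
feature_outputs = rng.normal(size=(4, 8))

# Softmax over activations: all features contribute, but the strongest
# ones dominate the weighted sum.
weights = np.exp(feature_activations) / np.exp(feature_activations).sum()
output = weights @ feature_outputs

print(weights.round(3))  # strongest activation gets the largest weight
```

The point of the sketch is just that nothing "decides" sequentially: every feature contributes simultaneously, and relative strength determines influence.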