TechBrief — The Latest Technology News

A daily reference for short news summaries and analyses from reputable sources.

Latest News

Autoresearch for SAT Solvers

Article URL: https://github.com/iliazintchenko/agent-sat

Comments URL: https://news.ycombinator.com/item?id=47433265

Points: 6

# Comments: 0

Austin’s surge of new housing construction drove down rents

Article URL: https://www.pew.org/en/research-and-analysis/articles/2026/03/18/austins-surge-of-new-housing-construction-drove-down-rents

Comments URL: https://news.ycombinator.com/item?id=47433058

Points: 160

# Comments: 121

Meta is having trouble with rogue AI agents

A rogue AI agent inadvertently exposed Meta company and user data to engineers who didn't have permission to see it.

Sam Altman’s thank-you to coders draws the memes

Altman expresses gratitude to the people who know how to write their code from scratch. The internet replies with salty jokes.

Kagi Translate's AI answers the question "What would horny Margaret Thatcher say?"

Remember when it was fun to play around with LLMs?

What’s on HTTP?

Article URL: https://whatsonhttp.com/

Comments URL: https://news.ycombinator.com/item?id=47431930

Points: 26

# Comments: 6

The FBI is buying Americans’ location data

FBI director Kash Patel admitted that the agency is buying location data that can be used to track people's movements. Unlike information obtained from cell phone providers, this data can be accessed without a warrant - and used to track anyone. "We do purchase commercially available information that's consistent with the Constitution and the laws […]

Musk’s tactic of blaming users for Grok sex images may be foiled by EU law

Planned EU ban on nudify apps would likely force Musk to make Grok less "spicy."

Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

I replicated David Ng's RYS method (https://dnhkng.github.io/posts/rys/) on consumer AMD GPUs (RX 7900 XT + RX 6950 XT) and found something I didn't expect.

Transformers appear to have discrete "reasoning circuits" — contiguous blocks of 3-4 layers that act as indivisible cognitive units. Duplicate the right block and the model runs its reasoning pipeline twice. No weights change. No training. The model just thinks longer.

The results on standard benchmarks (lm-evaluation-harness, n=50):

Devstral-24B, layers 12-14 duplicated once:
- BBH Logical Deduction: 0.22 → 0.76
- GSM8K (strict): 0.48 → 0.64
- MBPP (code gen): 0.72 → 0.78
- Nothing degraded

Qwen2.5-Coder-32B, layers 7-9 duplicated once:
- Reasoning probe: 76% → 94%

The weird part: different duplication patterns create different cognitive "modes" from the same weights. Double-pass boosts math. Triple-pass boosts emotional reasoning. Interleaved doubling (13,13,14,14,15,15,16) creates a pure math specialist. Same model, same VRAM, different routing.

The circuit boundaries are sharp — shift by one layer and the effect disappears or inverts. Smaller models (24B) have tighter circuits (3 layers) than larger ones (Ng found 7 layers in 72B).

Tools to find circuits in any GGUF model and apply arbitrary layer routing are in the repo. The whole thing — sweep, discovery, validation — took one evening.
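The routing idea described above can be sketched in a few lines. This is a hypothetical illustration (not the poster's actual tool): it only builds the layer execution order that such a router would follow, with `duplicate_block` and `interleave_block` as made-up helper names. Duplicating layers 12-14 once on a 40-layer model yields ..., 11, 12, 13, 14, 12, 13, 14, 15, ...; interleaved doubling yields the 13,13,14,14,15,15,16 pattern from the post.

```python
def duplicate_block(n_layers, start, end, repeats=1):
    """Execution order that runs the contiguous block start..end
    (inclusive) repeats+1 times in sequence. Weights are untouched;
    only the routing changes."""
    block = list(range(start, end + 1))
    return (list(range(start))
            + block * (repeats + 1)
            + list(range(end + 1, n_layers)))

def interleave_block(n_layers, start, end):
    """Execution order where each layer in start..end runs twice
    back-to-back, e.g. 13,13,14,14,15,15."""
    doubled = [i for layer in range(start, end + 1) for i in (layer, layer)]
    return list(range(start)) + doubled + list(range(end + 1, n_layers))

# Devstral-24B example from the post: layers 12-14 duplicated once.
order = duplicate_block(40, 12, 14)
# Interleaved doubling of layers 13-15, then 16 runs once as usual.
inter = interleave_block(40, 13, 15)
```

An inference loop would then iterate over `order` instead of `range(n_layers)`, feeding each hidden state through the layer at that index, so the same block of weights is applied more than once per forward pass.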

Happy to answer questions.

Comments URL: https://news.ycombinator.com/item?id=47431671

Points: 44

# Comments: 7

Coal plant forced to stay open due to emergency order isn't even running

Department of Energy's attempts to prop up coal can look pretty pointless.

Categories

General: gadgets, software, security, AI, startups