Hacker News Digest — 2026-02-16-PM
Daily HN summary for February 16, 2026, focusing on the top stories and the themes that dominated discussion.
Themes
- Agents are getting more “structured” interfaces: skills, tool schemas, and browser-integrated action APIs.
- Model progress talk is increasingly about post-training/RL scaling and the hardware reality of running models.
- Ambient identifiers (Bluetooth/Wi‑Fi) enable behavioral inference even without content access.
- Tooling ecosystems (like Ghidra) are absorbing LLMs cautiously: augmentation vs replacement of fundamentals.
Qwen3.5: Towards Native Multimodal Agents (https://qwen.ai/blog?id=qwen3.5)
Summary: Qwen positions Qwen3.5 as a step toward “native multimodal agents,” with commenters focusing on RL/post-training scaling and real-world deployment constraints.
- Readers debate whether gains come mainly from broadly scaling RL tasks/environments, or from other training/engineering choices.
- Practical threads on quantization (2–4 bit), MoE routing, KV-cache limits, and what hardware actually matters.
- Some argue “trick question” benchmarks increasingly test routing/optimizations rather than raw reasoning.
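The KV-cache concern in those hardware threads comes down to simple arithmetic: the cache grows linearly with context length and batch size. A back-of-the-envelope sketch, with all model dimensions invented for illustration (not Qwen3.5's actual configuration):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    """Size of the key/value cache: 2 tensors (K and V) per layer,
    each of shape [batch, kv_heads, seq_len, head_dim]."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical mid-size model: 48 layers, 8 KV heads (GQA), head_dim 128, fp16.
gib = kv_cache_bytes(layers=48, kv_heads=8, head_dim=128, seq_len=32_768, batch=1) / 2**30
print(f"{gib:.1f} GiB")  # 6.0 GiB for a single 32k-token sequence
```

At fp16 this single sequence already consumes 6 GiB on top of the weights, which is why cache quantization and grouped-query attention keep coming up alongside 2-4 bit weight quantization.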
14-year-old Miles Wu folded an origami pattern that holds 10k times its own weight (https://www.smithsonianmag.com/innovation/this-14-year-old-is-using-origami-to-design-emergency-shelters-that-are-sturdy-cost-efficient-and-easy-to-deploy-180988179/)
Summary: A viral story about extreme compressive strength from origami-like folding geometry sparks debate about attribution, scaling, and whether the shelter framing makes sense.
- Many emphasize years of practice over “14-year-old genius” framing.
- Attribution debate: existing fold vs meaningful optimization/measurement work.
- Practicality skepticism: strong-in-one-direction lab demos vs real-world multidirectional loads and materials.
Ghidra by NSA (https://github.com/NationalSecurityAgency/ghidra)
Summary: The Ghidra reverse-engineering framework remains a cornerstone tool, with lots of interest in extensions and AI-assisted workflows.
- Split between excitement for LLM+Ghidra helpers and warnings about outsourcing the “hard thinking.”
- Many beginner resource recommendations and “how to get started” advice.
- Extension ecosystem gets airtime (e.g., delinking/export workflows) and real-world decompilation anecdotes.
What your Bluetooth devices reveal (https://blog.dmcc.io/journal/2026-bluetooth-privacy-bluehood/)
Summary: A DIY Bluetooth scanner project (“Bluehood”) demonstrates how passive scanning can reveal routines and co-presence patterns with cheap hardware.
- People connect it to similar tracking via Wi‑Fi SSIDs, TPMS sensors, and ALPR/plate capture.
- Debates about how much MAC randomization helps (in-store tracking vs repeat-visit tracking).
- Concern about devices users can’t disable (medical/assistive devices), plus OS-level mitigations.
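The co-presence inference needs nothing beyond timestamped sightings. A minimal sketch of the idea (device IDs, timestamps, and the window size are all invented for illustration; a real scan log would hold MAC addresses or stable fingerprints):

```python
from collections import defaultdict

# Each sighting: (minute, device_id) from a passive scanner at one location.
sightings = [
    (0, "phone_A"), (1, "watch_B"), (2, "phone_A"),
    (60, "phone_A"), (61, "watch_B"),
    (120, "earbuds_C"),
]

def copresence(sightings, window=5):
    """Count how often two devices appear within the same time window."""
    buckets = defaultdict(set)
    for minute, dev in sightings:
        buckets[minute // window].add(dev)
    pairs = defaultdict(int)
    for devs in buckets.values():
        for a in devs:
            for b in devs:
                if a < b:  # count each unordered pair once
                    pairs[(a, b)] += 1
    return dict(pairs)

print(copresence(sightings))  # {('phone_A', 'watch_B'): 2} — A and B travel together
```

Repeat the same counting across days and the pair counts become a routine: which devices (and so which people) show up together, and when.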
Show HN: Jemini – Gemini for the Epstein Files (https://jmail.world/jemini)
Summary: An AI interface over Epstein-related archives draws attention mainly for provenance/verification concerns and the risks of hallucinated “facts.”
- Users ask where certain “sponsored/verified” items come from and what verification claims mean.
- Developers emphasize grounding via links to original docs, and discuss redaction tradeoffs.
- Broader skepticism: LLMs aren’t sources; they’re an interface that must be audited against documents.
Privilege is bad grammar (https://tadaima.bearblog.dev/privilege-is-bad-grammar/)
Summary: The post argues that sloppy grammar can be a power signal: once you’re “important,” you don’t need to perform professionalism.
- Lots of countersignaling theory: try-hard vs blend-in vs “so powerful you can ignore norms.”
- AI complicates the signal: perfect grammar is cheap; “human texture” may become a marker (and can also be faked).
- Extended argument about intent vs perception in what counts as a “signal.”
Study: Self-generated Agent Skills are useless (https://arxiv.org/abs/2602.12670)
Summary: SkillsBench reports curated skills can improve agent task success, while “self-generated skills” (written upfront) don’t help on average.
- Main critique: the paper’s “self-generated skills” aren’t post-hoc learning from attempts; they’re pre-task hallucinated docs.
- Skepticism about task realism (single markdown spec + verifier; little exploration or real codebases).
- Many want a missing condition tested: human–AI collaborative skills and iterative updates.
WebMCP Proposal (https://webmachinelearning.github.io/webmcp/)
Summary: WebMCP proposes a browser API for web pages to register structured “tools” agents can call within a live session.
- The blank security/privacy and accessibility sections are treated as unintentionally on-the-nose.
- Debate on whether this duplicates semantic HTML/ARIA or finally avoids brittle DOM automation.
- Prompt-injection and session/cookie access concerns vs permission-model analogies.
Visual Introduction to PyTorch (https://0byte.io/articles/pytorch_introduction.html)
Summary: A visual-first PyTorch introduction walks through tensors, autograd intuition, and a simple model/training loop with practical caveats.
- Praise for distribution histograms to explain rand vs randn vs empty.
- Appreciation for candid results and the reminder that missing features can dominate model performance.
- Request for a follow-up comparing deep nets to XGBoost/LightGBM on the same data.
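For reference, the distinction those histograms draw, shown here with NumPy's analogues (torch.rand, torch.randn, and torch.empty behave the same way):

```python
import numpy as np

rng = np.random.default_rng(0)

u = rng.random(100_000)           # like torch.rand: uniform on [0, 1)
g = rng.standard_normal(100_000)  # like torch.randn: standard normal, mean 0, std 1
e = np.empty(100_000)             # like torch.empty: UNINITIALIZED memory, not zeros

print(u.min() >= 0.0 and u.max() < 1.0)  # True — bounded support
print(abs(g.mean()) < 0.05)              # True — centered near zero
# e's contents are whatever was already in memory; never treat it as random data.
```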
How to take a photo with scotch tape (lensless imaging) [video] (https://www.youtube.com/watch?v=97f0nfU5Px0)
Summary: A short demo uses scotch tape in a lensless imaging setup, reframing “taking a photo” as a reconstruction/deconvolution inverse problem.
- Commenters highlight its educational value for building intuition about inverse problems.
- One commenter asks whether, once deconvolution is applied, it's effectively a pinhole-camera-like setup.
- Mostly light banter due to the small thread.
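The "photo as an inverse problem" framing fits in a few lines: convolve a sparse 1-D scene with a known smear standing in for the tape's point-spread function, then invert it with a regularized (Wiener-style) frequency-domain division. All signals here are synthetic toy data, not the video's actual setup:

```python
import numpy as np

n = 64
scene = np.zeros(n)
scene[20], scene[40] = 1.0, 0.5  # two point sources

psf = np.zeros(n)
psf[:5] = 1.0 / 5                # 5-sample box smear (toy "tape" PSF)

# Forward model: circular convolution, done in the frequency domain.
H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Wiener-style inverse: divide by H, regularized so near-zeros of |H| don't blow up.
eps = 1e-3
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print(int(np.argmax(recovered)))  # 20 — the brightest source comes back in place
```

The regularization constant is the whole game: set eps to 0 and frequencies the smear nearly destroyed amplify noise instead of detail, which is why real lensless reconstructions lean on priors rather than naive inversion.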