Hackerbrief

The top 10 posts on Hacker News summarized daily

Generated: 3/16/2026, 12:53:43 AM

Built by P0u4a

Cannabinoids remove plaque-forming Alzheimer's proteins from brain cells

by anjel | HN thread

Cannabinoids Show Promise Against Alzheimer's Proteins

Scientists at the Salk Institute have uncovered preliminary laboratory evidence suggesting that tetrahydrocannabinol (THC) and other compounds found in marijuana can facilitate the cellular removal of amyloid beta, a toxic protein linked to Alzheimer's disease. These exploratory studies, conducted on human neurons grown in the lab, indicate that cannabinoids may offer insights into the role of inflammation in Alzheimer's and could lead to new therapeutic approaches. Salk Professor David Schubert noted, “Although other studies have offered evidence that cannabinoids might be neuroprotective against the symptoms of Alzheimer’s, we believe our study is the first to demonstrate that cannabinoids affect both inflammation and amyloid beta accumulation in nerve cells.” Alzheimer's disease affects over five million Americans and is projected to triple in incidence over the next 50 years.

Mechanism of Action in Nerve Cells

The Salk team studied nerve cells engineered to produce high levels of amyloid beta, mimicking aspects of Alzheimer's. They observed that elevated amyloid beta levels correlated with cellular inflammation and increased neuron death. Exposing these cells to THC significantly reduced amyloid beta protein levels and eliminated the associated inflammatory response, thereby enhancing nerve cell survival. Antonio Currais, a postdoctoral researcher and first author, explained, “When we were able to identify the molecular basis of the inflammatory response to amyloid beta, it became clear that THC-like compounds that the nerve cells make themselves may be involved in protecting the cells from dying.” Brain cells possess receptors activated by endocannabinoids, lipid molecules naturally produced by the body for intercellular signaling; THC activates these same receptors.

Future Implications and Research

These findings suggest that cannabinoids could play a protective role against the damage associated with Alzheimer's disease, particularly by mitigating inflammation within the brain. While promising, Schubert emphasized that these results are from exploratory laboratory models and that any therapeutic use of THC-like compounds would require rigorous testing in clinical trials. This research was partly informed by separate work in Schubert's lab on an Alzheimer's drug candidate called J147, which also removes amyloid beta and reduces inflammation, leading to the discovery of endocannabinoids' involvement in these processes.

The Linux Programming Interface as a university course text

by teleforce | HN thread

TLPI's Unexpected Academic Adoption

"The Linux Programming Interface" (TLPI) has found an unexpected role in academia, with numerous university teachers adopting it as a required text or recommended reading for courses on Linux or UNIX system programming. The author notes that the book was not specifically written with the university market in mind.

Author Seeks Feedback for Future Editions

The author is actively seeking detailed information from university teachers currently using TLPI in their courses. This outreach aims to gather insights that will inform and improve a future edition of the book, specifically tailoring it for better use within the academic market.

Key Information Requested from Educators

To facilitate improvements, the author has outlined several specific questions for university teachers. These include the name and URL of their institution, an outline of the course where TLPI is used, the course level (e.g., 3rd or 4th year), the number of enrolled students, and whether TLPI is a required or recommended text. Most importantly, the author is interested in educators' opinions on how TLPI could be improved for use as a university course book.

Excel incorrectly assumes that the year 1900 is a leap year

by susam | HN thread

The 1900 Leap Year Anomaly in Excel

Microsoft Excel incorrectly assumes that the year 1900 was a leap year, despite it not being one. This behavior originated with Lotus 1-2-3, which made the assumption when it was first released in order to simplify leap year calculations. Microsoft Multiplan and Excel later adopted the same serial date system to ensure greater compatibility with Lotus 1-2-3 and facilitate the movement of worksheets between programs.

Why the Behavior Persists

Although technically possible to correct this anomaly in current Excel versions, the disadvantages of doing so significantly outweigh any benefits. Correcting the behavior would cause widespread issues, including almost all dates in existing Excel worksheets and documents decreasing by one day. This shift would demand considerable time and effort to fix, especially in formulas that rely on dates. Furthermore, functions like the WEEKDAY function would return different values, potentially causing formulas to work incorrectly, and serial date compatibility with other programs would be broken.

Limited Impact of Non-Correction

If the behavior remains uncorrected, only one specific problem arises: the WEEKDAY function returns incorrect values for dates prior to March 1, 1900. Because most users do not use dates before March 1, 1900, this problem is rare. It's important to note that Excel correctly handles all other leap years, including century years that are not leap years, such as 2100; only the year 1900 is treated incorrectly.
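
The leap-year rule, and the weekday skew the phantom day causes, can be checked with Python's standard library:

```python
import calendar
from datetime import date

# Gregorian rule: century years are leap years only when divisible by 400.
print(calendar.isleap(1900))   # False: 1900 was not a leap year
print(calendar.isleap(2000))   # True
print(calendar.isleap(2100))   # False: a case Excel handles correctly

# The phantom 1900-02-29 skews Excel's WEEKDAY for earlier dates: Excel
# reports Jan 1, 1900 as a Sunday, but the real calendar says Monday.
print(date(1900, 1, 1).strftime("%A"))   # Monday
# From March 1, 1900 onward the extra day cancels out and weekdays agree.
print(date(1900, 3, 1).strftime("%A"))   # Thursday
```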

Show HN: Open-source playground to red-team AI agents with exploits published

by zachdotai | HN thread

Stress-Testing AI Agent Defenses

Fabraix Playground offers a live environment designed to stress-test AI agent defenses through adversarial play, aiming to build trust in these evolving systems. As AI agents increasingly handle repetitive tasks, freeing humans for creative work, ensuring their reliability becomes paramount. The platform emphasizes that trust in AI agents cannot be built in isolation but must be earned collectively, in the open, by a community of researchers, engineers, and the genuinely curious. This open approach allows for pressure-testing systems and sharing findings to foster a deeper understanding of AI security.

The Adversarial Challenge Process

Each challenge within the Playground presents a live AI agent equipped with a specific persona, tools like web search and browsing, and instructions it's designed to protect; the agent's system prompt is fully visible. Participants' objective is to discover methods to bypass these guardrails. The community-driven process involves anyone proposing a challenge scenario, agent, and objective, with the top-voted challenge going live under a ticking clock. The individual achieving the fastest successful "jailbreak" wins, and their winning technique, including approach and reasoning, is published. In the project's words, "every technique we publish advances what the community collectively understands about how AI agents fail — and how to build ones that don't." This published knowledge then drives the development of better defenses, leading to more challenging scenarios and, ultimately, deeper insights into AI agent robustness.

Project Architecture and Community Engagement

The Playground's technical foundation includes a React frontend built with TypeScript, Vite, and Tailwind, alongside an open and versioned /challenges directory containing all challenge configurations and system prompts. Guardrail evaluation is conducted server-side to prevent client-side manipulation, and the agent runtime is slated for separate open-sourcing. Fabraix, specializing in runtime security for AI agents, uses the Playground to openly stress-test defenses and enable the broader community to contribute to a shared understanding of AI security and its failure modes. Community members are encouraged to get involved by proposing new challenges, suggesting agent capabilities, reporting bugs, or engaging in discussions on Discord; in the project's view, the more people probing these systems, the better the outcomes for everyone building with AI.

Nasdaq's Shame

by imichael | HN thread

Nasdaq's Controversial Index Consultation

Nasdaq has initiated a "consultation" on proposed updates to its Nasdaq-100 Index methodology, which the author characterizes as a plan to "forcefully transfer wealth from the retirement accounts of passive retail investors directly into the pockets of corporate insiders and early investors." Historically, index investing allowed investors to benefit from market price discovery, but now, trillions in passive funds are seen as dictating market structure. The author suggests these changes are a "masterclass in structural market manipulation," quipping that when buying and selling are controlled by legislation, the first things to be bought and sold are legislators.

The SpaceX IPO and Rule Bending

The impetus for these rule changes appears to be SpaceX's impending IPO, with a reported target valuation of $1.75 trillion. Nasdaq is allegedly bending its rules to secure this lucrative listing over the NYSE, specifically by accommodating SpaceX's demand for near-immediate index inclusion. This move would also give Nasdaq an advantage for future large IPOs. The proposed "Fast Entry" rule would allow newly listed companies whose market capitalization ranks within the top 40 current constituents to be added to the index after just fifteen trading days, entirely exempt from the standard seasoning and liquidity requirements.

The Diabolical 5x Multiplier

A key proposal is a new approach for "low-float" securities, defined as those below 20% free float. Unlike the S&P 500, which is strictly free-float adjusted, Nasdaq's current methodology includes locked-up insider shares. The proposed "fix" is to mechanically adjust each low-float security's weight to five times its free float percentage, capped at 100%. For example, if SpaceX IPOs at $1.75 trillion with only 5% of shares floated ($87.5 billion), its index weight would be calculated as if 25% of its total market cap ($438 billion) were tradable. This forces passive Nasdaq-100 ETFs and mutual funds to buy allocations based on this inflated weighting on Day 15.
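
The arithmetic in that example can be sketched directly, using the figures reported in the post (a $1.75 trillion valuation, a 5% float, and the proposed 5x multiplier capped at 100%):

```python
# Worked example with the post's reported figures (not official Nasdaq math).
market_cap = 1.75e12   # reported SpaceX target valuation, in dollars
free_float = 0.05      # fraction of shares actually floated at IPO

tradable = market_cap * free_float            # dollars truly tradable
weighted = min(free_float * 5, 1.0)           # proposed 5x multiplier -> 25%
treated_as_tradable = market_cap * weighted   # basis for the index weight

print(f"Actual float:             ${tradable / 1e9:.1f}B")            # $87.5B
print(f"Index treats as tradable: ${treated_as_tradable / 1e9:.1f}B") # $437.5B
```

The roughly $438 billion figure in the post is this $437.5 billion result, rounded.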

Engineered Liquidity Squeeze and Insider Strategy

This mechanism creates a "massive, artificial supply-and-demand squeeze," as tens of billions of price-insensitive, passive capital are legally mandated to aggressively bid for a restricted float over a matter of days. Active traders are expected to front-run this guaranteed demand, forcing passive funds to buy at potentially "insane price[s]." The author argues this corrupts the baseline, establishing an "artificially elevated price floor" fueled by forced buying; in his words, "you are effectively forcing a firehose of mega-cap index capital through a garden hose of actual liquidity."

Strategic Lock-up Expiration and SpaceX Timing

Nasdaq's rules state float figures are only updated quarterly, and the 5x multiplier drops when float exceeds 20%, upgrading the company to 100% weighting. The author suggests insiders will time lock-up expirations just before a quarterly rebalance, forcing passive funds to "aggressively buy billions of dollars more of the stock the exact moment the insiders are able to flood the market with their unlocked shares." SpaceX is reportedly targeting a mid-June IPO, which aligns with hitting the December 18, 2026 quarterly rebalance, maximizing this effect. The author dismisses the purported reason for the mid-June IPO, "The alignment of Jupiter and Venus," as absurd.

Kangina

by thunderbong | HN thread

Ancient Afghan Fruit Preservation

Kangina, also known as Gangina, is a traditional Afghan technique for preserving fresh fruit, primarily grapes, using airtight discs made from mud and straw. The centuries-old technique is indigenous to Afghanistan's rural center and north, where it enables remote communities to eat fresh grapes throughout the winter and allows merchants to store and transport grapes for market. Grapes like the thick-skinned Taifi or Kishmishi varieties, harvested later in the season, can remain fresh in these mud vessels for up to six months.

The Kangina Method Explained

The preservation method functions as a form of passive controlled-atmosphere storage, sealing fruit within clay-rich mud to restrict air, moisture, and microbes. Discs are crafted from two sun-baked, bowl-shaped pieces of mud and straw, filled with 1–2 kilograms (2.2–4.4 lb) of un-bruised fruit, and then sealed with more mud. Stored in dry, cool conditions away from direct sunlight, the vessels permit a gradual permeation of gas through the clay barrier: enough oxygen enters to keep the grapes alive, while elevated carbon dioxide inside inhibits their metabolism and prevents fungal growth. The mud also absorbs excess liquid, preventing bacterial and fungal development.

Effectiveness and Historical Roots

The practice of storing grapes in mud and straw has historical precedents, with records dating back to the 12th century when Sevillan agronomist Ibn al-'Awwam described similar techniques in Andalusia. Kangina vessels are noted for being inexpensive, eco-friendly, and effective for fruit preservation; a 2023 study identified them as among the most effective for grapes, comparable to polystyrene foam boxes. However, the containers are also heavy, unwieldy, and susceptible to moisture absorption.

Show HN: Free OpenAI API Access with ChatGPT Account

by EvanZhouDev | HN thread

Project Overview and Usage

The openai-oauth project enables users to access OpenAI's API by leveraging their existing ChatGPT account, thereby eliminating the need to purchase separate API credits. It offers two primary methods for interaction: an openai-oauth CLI and an openai-oauth-provider designed for the Vercel AI SDK. The CLI establishes a localhost proxy, providing an OpenAI-compatible endpoint at http://127.0.0.1:10531/v1 that can be used as a base URL without requiring an API key.

Technical Details and Limitations

The CLI supports various models, including gpt-5.4 and gpt-5.3-codex, while the provider facilitates integration into AI applications using createOpenAIOAuth(). Both methods share core OAuth transport settings and support /v1/responses, /v1/chat/completions, and /v1/models endpoints. The system functions by utilizing the same OAuth tokens as OpenAI's Codex CLI, which accesses a specific endpoint at chatgpt.com/backend-api/codex/responses to leverage special OpenAI rate limits tied to a user's ChatGPT account. Current limitations include support only for LLMs available through Codex, with model access dependent on the user's Codex plan. The login process is not integrated and requires executing npx @openai/codex login to generate the necessary authentication file. Furthermore, the CLI's /v1/responses endpoint is stateless, necessitating that callers send the complete conversation history with each request.
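
As a minimal sketch of what the statelessness implies for callers, the request below is built (but not sent) against the proxy's endpoint from the project description; the conversation contents are illustrative, and available model names depend on your Codex plan:

```python
import json

# The local proxy's OpenAI-compatible base URL; no API key is required.
BASE_URL = "http://127.0.0.1:10531/v1"

# Because the CLI's /v1/responses endpoint is stateless, every request must
# carry the full conversation history, not just the newest message.
history = [
    {"role": "user", "content": "What models does this proxy expose?"},
    {"role": "assistant", "content": "Whatever your Codex plan allows."},
    {"role": "user", "content": "And how do I authenticate?"},  # newest turn
]

payload = json.dumps({
    "model": "gpt-5.3-codex",  # model access depends on the user's plan
    "messages": history,       # the entire history, resent on every request
})

print("POST", BASE_URL + "/chat/completions")
print(payload)
```

In practice you would point any OpenAI-compatible client at BASE_URL and let it send such payloads for you.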

Important Considerations

This project is an unofficial, community-maintained effort and is not affiliated with, endorsed by, or sponsored by OpenAI, Inc. It relies on the user's local Codex/ChatGPT authentication cache (auth.json), which contains credentials equivalent to a password. Users are strongly advised to “Use only for personal, local experimentation on trusted machines; do not run as a hosted service, do not share access, and do not pool or redistribute tokens.” Additionally, users are “solely responsible for complying with OpenAI’s Terms, policies, and any applicable agreements; misuse may result in rate limits, suspension, or termination.” The project is provided "as is," and users assume all associated risks for data exposure, costs, and account actions.

Canada's bill C-22 mandates mass metadata surveillance of Canadians

by opengrass | HN thread

Bill C-22: A New Phase for Lawful Access Legislation

Bill C-22, the Lawful Access Act, marks a new phase in the decades-long debate over government access to personal information, following the controversial Bill C-2. Last spring, Bill C-2 faced immediate backlash due to its "unprecedented rules permitting widespread warrantless access to personal information," which were on "very shaky constitutional ground" and unlikely to pass constitutional muster. The government subsequently decided to hit the reset button on lawful access, separating the border measures from the lawful access provisions, leading to the introduction of Bill C-22. This new bill addresses two primary aspects of lawful access: law enforcement's ability to access personal data held by communication service providers and the development of surveillance and monitoring capabilities within Canadian networks. The legislation is formally divided into two parts: the first half dealing with "timely access to data and information" and the second establishing the "Supporting Authorized Access to Information Act (SAAIA)."

Improved Data Access, But Oversight Concerns Remain

The "timely access to data and information" section of Bill C-22 shows considerable improvement over its predecessor, Bill C-2, whose breadth was "astonishing": the earlier bill targeted any service provider in Canada, including physicians and lawyers, for warrantless disclosure of personal information, directly contradicting recent Supreme Court of Canada jurisprudence. Bill C-22 instead limits warrantless demands to a single new "confirmation of service" power, under which law enforcement can require telecom providers (not any service provider) to confirm whether they provide service to a particular person. This addresses a longstanding police complaint that officers may do considerable work seeking information about a subscriber only to learn the person isn't a customer. Access to any other subscriber information will now require a new production order, reviewed and approved by a judge. While this shift towards judicial oversight for more personal data is a major concession, acknowledging that Bill C-2 was overly broad and privacy-invasive, concerns persist regarding the low "reasonable grounds to suspect" standard envisioned for these production orders.

Broadened Surveillance Powers Under SAAIA

Despite improvements in data access, the SAAIA component of Bill C-22 raises significant privacy and civil liberties concerns, largely mirroring or even expanding upon the problematic elements of Bill C-2. The SAAIA establishes new requirements for "electronic service providers" to actively work with law enforcement on surveillance and monitoring capabilities. The term is broadly defined as "a person that... provides an electronic service... to persons in Canada; or carries on all or part of its business activities in Canada," explicitly extending beyond traditional telecom and Internet providers to include major international Internet platforms like Google and Meta, which are now key players in electronic communications (e.g., Gmail or WhatsApp). An "electronic service" itself is defined as "a service, or a feature of a service, that involves the creation, recording, storage, processing, transmission, reception, emission or making available of information in electronic, digital or any other intangible form by an electronic, digital, magnetic, optical, biometric, acoustic or other technological means, or a combination of any such means." All electronic service providers are obligated to "provide all reasonable assistance, in any prescribed time and manner, to permit the assessment or testing of any device, equipment or other thing that may enable an authorized person to access information" and are required to keep such requests secret, preventing public scrutiny.

Expanded Metadata Retention and Security Risks

Beyond these basic obligations, the SAAIA identifies "core providers" who will be subject to additional, more stringent regulations. These may include requirements for the development, implementation, assessment, testing, and maintenance of operational and technical capabilities for extracting and organizing authorized information, as well as the installation, use, operation, management, assessment, testing, and maintenance of any device, equipment or other thing that may enable an authorized person to access information. Core providers may also be required to provide notices to the Minister or other persons regarding these capabilities and devices. Crucially, the bill introduces a new requirement for core providers to retain "categories of metadata — including transmission data, as defined in section 487.011 of the Criminal Code — for reasonable periods of time not exceeding one year," a significant expansion not present in Bill C-2. While the bill specifies limits, prohibiting the retention of content, web browsing history, or social media activities, and includes an exception for systemic vulnerabilities, critics argue these safeguards are insufficient. Concerns remain that networks could be made less secure by virtue of these rules, with changes kept secret from the public, hindering transparency and accountability. Furthermore, many of these rules appear geared towards global information sharing, including compliance with the Second Additional Protocol to the Budapest Convention (2AP) and the CLOUD Act, raising questions about data sovereignty and privacy across borders. In the author's assessment, the SAAIA "envisions a significant change to how government agencies interact with Canadian communications networks and network providers," raising enormous privacy and civil liberties concerns. This section of the bill, despite increased oversight from the Intelligence Commissioner, continues to pose serious risks regarding surveillance capabilities, security vulnerabilities, secrecy, and cross-border data sharing.

LLMs can be exhausting

by tjohnell | HN thread

The Exhaustion of LLM Work

Working with Large Language Models (LLMs) like Claude or Codex can be mentally exhausting, often leading to unproductive sessions. This fatigue frequently stems from the user's own mental state and inefficient workflows rather than inherent model limitations. As a user becomes tired, prompt quality degrades, resulting in poorer AI performance. For instance, interrupting an LLM after realizing key context was missed, whether by stopping Claude Code mid-task or "steering" in Codex, reliably leads to worse outcomes.

Avoiding the "Doom-Loop Psychosis"

Slow feedback loops and excessive context usage further exacerbate the problem. Tasks requiring large file parsing, where every tweak necessitates a slow re-parse, can feel like a "slot machine that takes 10 minutes to spin," quickly filling the LLM's context window and diminishing its effectiveness. To avoid this "doom-loop psychosis," it's crucial to recognize when mental fatigue sets in; as the author puts it, "If I reach the point where I am not getting joy out of writing a great prompt, then it's time to throw in the towel." Metacognition is vital to ensure prompts are well-thought-out, preventing the seductive trap of hoping the AI will fill in undefined requirements.

Optimizing Feedback Loops

When faced with slow processes, the solution is to make the feedback loop speed itself the problem to solve. By explicitly instructing the LLM to achieve a "sub 5-minute loop" for reproducing failure cases, similar to Test-Driven Development (TDD) principles, the AI can help optimize its own code path for faster iteration. This approach not only reproduces the problem but also creates levers for a quicker feedback cycle, consuming less context and leading to a "smarter" AI, which can save hours of debugging time.