• CRYPTO-GRAM, December 15, 2025 Part 10

    From TCOB1 Security Posts@21:1/229 to All on Mon Dec 15 12:31:27 2025
    r, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
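The grant/revoke/audit model described above can be sketched in a few lines. This is a minimal illustration, not any real protocol: the class and method names are hypothetical, and a production system would need durable storage and real authentication.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PersonalDataStore:
    """Hypothetical sketch: owner grants/revokes access per data portion,
    and every access attempt is logged so it can be reviewed later."""
    owner: str
    data: dict = field(default_factory=dict)       # portion -> value
    grants: dict = field(default_factory=dict)     # portion -> set of principals
    audit_log: list = field(default_factory=list)  # (when, who, portion, action)

    def grant(self, principal: str, portion: str) -> None:
        self.grants.setdefault(portion, set()).add(principal)

    def revoke(self, principal: str, portion: str) -> None:
        # Revocation takes effect on the next read; no new access succeeds.
        self.grants.get(portion, set()).discard(principal)

    def read(self, principal: str, portion: str):
        allowed = principal in self.grants.get(portion, set())
        self.audit_log.append((datetime.now(timezone.utc), principal, portion,
                               "read" if allowed else "denied"))
        if not allowed:
            raise PermissionError(f"{principal} may not read {portion}")
        return self.data[portion]

    def who_accessed(self, portion: str):
        # "Go back in time": list everyone who tried to touch this portion.
        return [(when, who, action) for when, who, p, action in self.audit_log
                if p == portion]
```

Even denied attempts land in the audit log, which is what lets the owner see not just who accessed data but who tried to.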

    Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person's data. And there are also write attacks, where adversaries add to or change a user's data. Defending against both is critical; this all implies a complex and robust authentication system.
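One standard way to defend against the write attacks just described is to make unauthorized changes detectable. As an illustrative sketch only, and not part of Solid or any protocol cited here: if the owner holds a secret key and tags each record with an HMAC, any record silently altered by an adversary (or by the AI itself) fails verification on the next read.

```python
import hmac
import hashlib
import json

def tag(key: bytes, record: dict) -> str:
    """Compute a keyed MAC over a canonical serialization of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(key: bytes, record: dict, mac: str) -> bool:
    """Constant-time check that the record still matches its MAC."""
    return hmac.compare_digest(tag(key, record), mac)

# Assumption for this sketch: only the data's owner holds this key.
key = b"owner-held secret key"
record = {"context": "prefers morning meetings"}
mac = tag(key, record)

assert verify(key, record, mac)                 # authoritative record checks out
record["context"] = "prefers late meetings"     # a write attack alters the data
assert not verify(key, record, mac)             # the tampering is detected
```

This only provides tamper evidence, not tamper resistance; a full design would also need key management and protection against rollback to older, validly tagged records.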

    Sixth, and finally, it must be easy to use. If we're envisioning digital personal assistants for everybody, it can't require specialized security training to use properly.

    I'm not the first to suggest something like this. Researchers have proposed a "Human Context Protocol" (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee's Solid protocol for distributed data ownership.

    The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; neither can be neglected in favor of the other.

    Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can't easily manipulate you because you see what data it's using and can correct it. It can't easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it's the only way we can have trustworthy AI assistants.

    This essay was originally published in IEEE Security & Privacy.

    ** *** ***** ******* *********** *************

    Upcoming Speaking Engagements

    [2025.12.14] This is a current list of where and when I am scheduled to speak:

    I'm speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, at 6:00 PM CT on February 5, 2026. Details to come.
    I'm speaking at Capricon 44 in Chicago, Illinois, USA. The convention runs February 5-8, 2026. My speaking time is TBD.
    I'm speaking at the Munich Cybersecurity Conference in Munich, Germany on February 12, 2026.
    I'm speaking at Tech Live: Cybersecurity in New York City, USA on March 11, 2026.
    I'm giving the Ross Anderson Lecture at the University of Cambridge's Churchill College on March 19, 2026.
    I'm speaking at RSAC 2026 in San Francisco, California, USA on March 25, 2026.

    The list is maintained on this page.

    ** *** ***** ******* *********** *************

    Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.

    You can also read these articles on my blog, Schneier on Security.

    Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

    Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of over one dozen books -- including his latest, A Hacker's Mind -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

    Copyright (C) 2025 by Bruce Schneier.