I write my blog in Hungarian, but a few entries are available in English, also via RSS.

|
I first published the below on Linkedin here on 2025-01-25.
The PKI Consortium held a conference on Post-Quantum Cryptography on Jan 15-16. Let me share some background and my takeaways.
Quantum Computers (QCs) could perform some calculations much faster than any traditional computer. Not a mere thousand, million, or billion times faster, but radically faster ⏩⏩, providing 'efficient' solutions for math problems we cannot hope to solve with today's computers. They are specialized devices: you cannot browse the web or play 🎮 video games on a QC. You are unlikely to ever have one in your home, as they require very special physical conditions to function. However, they are likely to bring major breakthroughs in areas like optimization algorithms or machine learning, so they are being researched extensively. They are also likely to reshape cryptography.
A large-scale Quantum Computer (which does not exist yet) would allow much faster attacks against cryptographic algorithms:
- Symmetric key cryptography algorithms we use today are likely to remain secure 🔒 if used with long keys (e.g. AES-256). Shorter keys (e.g. 128 bits) will no longer be secure against QCs, as Grover's algorithm provides a quadratic speedup: brute-forcing a 128-bit key drops from roughly 2^128 to 2^64 operations. This is a major effect, but not earth-shattering.
- Meanwhile, QCs will have a devastating effect on the public key cryptography we use today, as both RSA and ECC can be broken efficiently with a QC: Shor's algorithm yields an exponential speedup, turning factoring and discrete logarithms into polynomial-time problems. QCs are going to render today's digital signatures and key establishment protocols insecure ⚡ (and with them certificates, PKI, TLS, etc.), so today's public key crypto algorithms will need to be replaced.
Post-Quantum Cryptography (PQC) is about migrating to stronger, quantum-resistant crypto algorithms (which will remain safe in the age of Quantum Computers).
It may take years or decades until a cryptographically relevant QC becomes reality. The technical problems seem solvable; one speaker at the conference suggested that how far away QCs are depends mainly on how badly people need them and how much money they are willing to spend. The recently announced Willow chip is one step towards scalable QCs, but it does not turn anything upside down. QCs are not a direct threat today, and it will take time from the first scalable QC until your adversaries get their hands on one too. However, preparing for them is not a problem of the far future. There are attackers already collecting and storing encrypted data, hoping they will be able to decrypt it once a QC becomes available. (This is called the 'harvest now, decrypt later' attack ⚡.)
Replacing crypto algorithms is hard. First, the new algorithms need to be researched. Second, they need to be standardized to allow interoperability. Once the standards are ready, developers can create software implementations, but you can only use them when both/all sides of your protocol support the new algorithm. (I recall migrating TLS to SHA2-based certs: a very large part of the Internet had to support SHA2 before people could even start installing SHA2-based certs on their servers.) Even when you are already using the new algorithms, legacy implementations may still downgrade to the old ones (and attackers will force the same). Only once there is a critical mass out there supporting the new algorithms can you enforce the new and disable the legacy algorithms -- this is the point when you are actually secure 🔒. (The tiny negotiation sketch below illustrates this.)
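Just to make the downgrade problem tangible, here is a deliberately simplified sketch; the algorithm names and the strength ranking are illustrative, and this is not any real protocol's negotiation logic:

```javascript
// Hypothetical, simplified algorithm negotiation -- not real TLS logic.
// Both sides advertise what they support; the strongest common algorithm wins.
const STRENGTH = { 'ML-KEM': 3, 'ECDHE': 2, 'RSA': 1 };

function negotiate(clientAlgs, serverAlgs) {
  const common = clientAlgs.filter((a) => serverAlgs.includes(a));
  if (common.length === 0) throw new Error('no common algorithm');
  return common.sort((a, b) => STRENGTH[b] - STRENGTH[a])[0];
}

// Both sides already support PQC -> the PQC algorithm is negotiated:
negotiate(['ML-KEM', 'ECDHE'], ['ML-KEM', 'ECDHE']); // 'ML-KEM'

// A legacy peer (or an attacker stripping the PQC option) forces a downgrade:
negotiate(['ML-KEM', 'ECDHE'], ['ECDHE']); // 'ECDHE'

// You are only safe once legacy algorithms are disabled on your side too:
negotiate(['ML-KEM'], ['ECDHE']); // throws -> fails closed instead of downgrading
```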
Past crypto migrations took over a decade 😮, and some speakers even questioned if they ever ended 😄 (yes, SHA1->SHA2 was hard). Still, if you would like to keep your secrets safe for X years, it takes Y years to migrate to quantum-safe algorithms, and a cryptographically relevant quantum computer is likely to arrive within Z < X+Y years, then you are already too late 😵. (See: Mosca's theorem.)
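To put made-up numbers on this (purely illustrative): say your data must stay confidential for X = 10 years and your migration realistically takes Y = 5 years. Data encrypted with legacy algorithms during the migration will still need protection 15 years from now, so if a cryptographically relevant QC shows up within Z < X + Y = 15 years, some of that data will be exposed -- meaning the migration should already be underway today.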
The standardization of PQC algorithms has just concluded. NIST had been running a process for selecting the new PQC algorithms since 2016, and released the standardized quantum-safe public key algorithms in 2024:
- ➡️ FIPS 203 ML-KEM (based on CRYSTALS-Kyber), a lattice-based key encapsulation mechanism. This is the only one that can be used for key establishment (like in TLS); the rest are for signatures. (See the short sketch after this list for what a KEM actually does.)
- ➡️ FIPS 204 ML-DSA (based on CRYSTALS-Dilithium 🖖), a lattice-based signature algorithm. This is intended to be the go-to signature algorithm.
- ➡️ FIPS 205 SLH-DSA (based on SPHINCS+), a stateless hash-based digital signature standard, which is meant to be a backup for the previous one. It is not lattice-based but follows a different, more conservative mathematical approach (hash-based signatures), and it is also unique in that its name does not come from a sci-fi franchise 😜.
- ➕ There is one more signature algorithm (FN-DSA, based on FALCON 🦅), which is going to be standardized in the future.
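As promised above, here is a conceptual sketch of what a key encapsulation mechanism (KEM) does. The 'ml-kem' module and its functions are hypothetical placeholders, not a real library API; the point is only the flow: the sender encapsulates a fresh shared secret against the receiver's public key, and the receiver decapsulates the ciphertext to recover the same secret, which then keys the symmetric encryption.

```javascript
// Conceptual KEM flow -- 'ml-kem' and its functions are hypothetical placeholders.
const { keygen, encapsulate, decapsulate } = require('ml-kem');

// Receiver: generate a key pair and publish the public key.
const { publicKey, secretKey } = keygen();

// Sender: derive a fresh shared secret plus a ciphertext to transmit.
const { sharedSecret, ciphertext } = encapsulate(publicKey);

// Receiver: recover the very same shared secret from the ciphertext.
const recovered = decapsulate(ciphertext, secretKey);

// sharedSecret and recovered are now equal -> use them as a symmetric (e.g. AES) key.
```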
NIST has also published a timeline (see: NIST IR 8547) for transitioning to the new PQC algorithms, detailing how long each current/legacy algorithm remains usable. The transition is expected to be completed 🏁 in 2035.
My key takeaway was also called out by the NSA presenter: standards are ready, the clock is ticking, time to roll up your sleeves and work on how you get to PQC.
|
Schneier's recent blog post shows an attack where ChatGPT processes a malicious doc and makes the AI agent execute commands embedded in it. We have seen such prompt injection attacks before; what puzzles me is that the post states we still don't know how to defend against these. Some comments under the post even say there is no way to defend against them, as this is exactly what we use agentic AI for, and blocking such attacks would remove the very features we use it for. Any given attack can be mitigated, but can we do something against them in the long run?
Most 'AI security' work I have seen treats AI as a black box and secures only what is around the AI (not the AI itself): it filters input to remove malicious prompts, sanity checks output, logs what happens, limits the privileges of the AI agent to the necessary minimum (least privilege is always a good idea), governs training data, tweaks prompts (asking the AI not to be evil), etc., and these all make sense. However, I can accept that it is very difficult to defend even against such basic attacks via 'guardrails' only, without touching what is inside. I am seeing similar claims about AI security elsewhere, e.g. in some recent posts by Disesdi Susanna Cox.
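To make the 'guardrails around a black box' idea concrete, here is a deliberately naive sketch: callModel, the regex filters and the log format are all made-up placeholders, not any real framework's API. Note that nothing here touches the model itself -- which is exactly the limitation I am worried about.

```javascript
// Deliberately naive guardrail wrapper -- callModel() and all checks are made-up placeholders.
async function guardedAgent(userInput, callModel) {
  // 1. Input filtering: try to catch known-bad prompt patterns (easily bypassed).
  const suspicious = [/ignore (all )?previous instructions/i, /exfiltrate/i];
  if (suspicious.some((re) => re.test(userInput))) {
    throw new Error('input rejected by filter');
  }

  // 2. The model itself remains an untouched black box.
  const output = await callModel(userInput);

  // 3. Output sanity check: e.g. do not let the agent emit shell commands.
  if (/rm -rf|curl .*\|.*sh/i.test(output)) {
    throw new Error('output rejected by filter');
  }

  // 4. Log everything for later forensics.
  console.log(JSON.stringify({ ts: Date.now(), userInput, output }));

  // 5. Whatever acts on this output should run with least privilege.
  return output;
}
```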
I have spent considerable time picking up the math behind AI/ML (linear algebra, perceptrons, neural networks, etc.), but it only made me more concerned. I often read that AI/ML mimics how neurons work in the human brain, yet we have little clue why this mathematical construct works, why and how it 'thinks'. (Sure, we understand how certain hyperparameters impact the way the AI learns or 'thinks'.) What really concerned me was meeting this 'we don't know how it works, it just works' statement in AI math training material. We are experimenting with the unknown, pushing random buttons on a machine we know nothing about; in some cases we get a candy, in others we get electrocuted or our left eyebrow turns purple and starts blinking. We observe, and try to figure out what is going on.
🤖🔒🧙‍♂️ Most AI security work I have seen resembles those old Dungeons & Dragons sessions more than what I know about how science or technology should work. "Any sufficiently advanced [magic] is indistinguishable from [fancy technologies we use]...?"
Mankind advances by experimentation; there is inherently nothing wrong with that. However, if we must use AI with access to production data (because sometimes we have to), we have to be conscious that it is a black box we have no reason to trust, and all we can do is throw a bunch of guardrails/wrappers around it and 🤞 hope the result will be good.
I don't count myself as very knowledgeable in this area; the above just reflects what I understood. -- What do you think?
Do you know of any works on securing the AI itself, not just via guardrails? (If yes, don't spare me; I am not afraid of reading math.)
This post was first published on Linkedin here on 2025-08-07.
|
This story is hilarious! In a nutshell:
- a lawyer used genAI🤖 for gathering relevant court cases supporting their argument;
- the judge became suspicious🕵️ and found that the cases the lawyer quoted (4 out of 4) did not exist or said something very different;
- the judge went ballistic😡... (fines, mandatory training, etc.)
If we considered genAI a 'person', coming up with fake precedents would be just unacceptable / unethical / etc. However, genAI is NOT a person.
Meanwhile, looking at genAI merely as 'glorified autocomplete', this may even be normal behaviour. How many times do we use genAI for roleplay🎭, for creating some filler text, some legal hocus-pocus for a movie script? In such contexts, making up some precedents could have been the desired outcome. How could the genAI know that in this situation it must refer to real cases? Was this just a simple hallucination👻? Was it a bad prompt, or some misunderstanding? Was this the 'correct' (but unwanted) behaviour for the genAI? -- I do not know.
Of course, submitting important info without checking it is always a bad idea. Responsibility cannot be pushed to the AI; the human is responsible.
Btw, this is a great example of a story where it is so easy to be smart -- after the fact...
This post was first published on Linkedin here on 2025-08-02.
|
When building and leading teams in Budapest shared service centers, a recurring challenge is teaching international management how to pronounce the names of the people on the team. Hungarian names may look scary at first, but don't worry! With a little help and some inside information (such as this cheat sheet I created), you will be able to confidently pronounce even those truly intimidating Hungarian names, like "Szabolcs". 😀
This is a reprint of my Linkedin post.
|
I was looking for an encryption/decryption tool with a very simple GUI that is available on all platforms (including mobile phones) and uses an open format so that I can decrypt my file via other standard tools too. Another key requirement was that I had to trust the tool.
The main use case was the protection of those few files I do not want to upload to a cloud drive unencrypted.
Having checked and discarded multiple Android apps, I ended up writing a tool of my own. Please find it here:
It is just an HTML file with JavaScript; your browser runs it on the client side only, and it does not upload anything anywhere. Actually, it is just a very simple wrapper around the CryptoJS library. Whatever I encrypt can be decrypted with an equivalent OpenSSL command. I have used it on files of ~10 megabytes; anything much bigger should not be handled in a browser.
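To illustrate the OpenSSL compatibility, here is a minimal sketch with made-up file names and passphrase. It assumes the CryptoJS defaults for passphrase-based encryption (AES-256-CBC with OpenSSL's EVP_BytesToKey/MD5 key derivation); the exact flags may differ depending on how the tool calls the library.

```javascript
// Minimal sketch of CryptoJS passphrase encryption, assuming library defaults.
const CryptoJS = require('crypto-js');

const ciphertext = CryptoJS.AES.encrypt('my secret note', 'correct horse battery staple').toString();
// 'ciphertext' is Base64 of the OpenSSL 'Salted__' format: save it e.g. as note.enc.

// The same data can then be decrypted outside the browser, roughly like this:
//   openssl enc -d -aes-256-cbc -md md5 -base64 -A -in note.enc \
//     -pass pass:'correct horse battery staple'
```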
Feel free to use it! :)