From Newsgroup: sci.crypt
## In this issue
1. [2023/204] TreePIR: Sublinear-Time and Polylog-Bandwidth ...
2. [2024/883] Low-Latency Linear Transformations with Small Key ...
3. [2025/1299] Sota Voce: Low-Noise Sampling of Sparse Fixed- ...
4. [2025/1597] The State-Test Technique on Differential Attacks: a ...
5. [2025/1599] AVX2 Implementation of QR-UOV for Modern x86 Processors
6. [2025/1824] Coppercloud: Blind Server-Supported RSA Signatures
7. [2025/1899] CoupledNets: Resisting Feature Snooping Attacks on ...
8. [2025/1900] Beholder Signatures
9. [2025/1901] Towards formal verification and corrupted setup ...
10. [2025/1902] Multi-Party Functional Encryption (MPFE): A ...
11. [2025/1903] HyperWolf: Lattice Polynomial Commitments with ...
12. [2025/1904] Predicting Module-Lattice Reduction
13. [2025/1905] Symphony: Scalable SNARKs in the Random Oracle ...
14. [2025/1906] A Simple and Efficient One-Shot Signature Scheme
15. [2025/1908] MPCitH Signature from Restricted Syndrome Decoding
16. [2025/1909] Weak Instances of the Inverse Matrix Code ...
17. [2025/1910] Fast Slicer for Batch-CVP: Making Lattice Hybrid ...
18. [2025/1911] Differential Meet-in-the-Middle Attacks on Feistel ...
19. [2025/1912] Quasar: Sublinear Accumulation Schemes for Multiple ...
20. [2025/1913] Unambiguous SNARGs for P from LWE with Applications ...
21. [2025/1914] A Note on ``Designing Anonymous Signature-Based ...
22. [2025/1915] A Framework for Efficient Quantum Implementations ...
23. [2025/1916] Graeffe-Based Attacks on Poseidon and NTT Lower Bounds
24. [2025/1917] Embedding belief propagation within a multi-task ...
25. [2025/1918] Differential-MITM Attack on 14-round ARADI
26. [2025/1919] UPPR: Universal Privacy-Preserving Revocation
27. [2025/1920] ALFOMs and the Moirai: Quantifying the ...
28. [2025/1921] Multi-Copy Security in Unclonable Cryptography
29. [2025/1922] Guaranteeing a Dishonest Party's Knowledge (Or: ...
30. [2025/1923] Coil-Based Detection and Concurrent Error ...
31. [2025/1924] Golden: Lightweight Non-Interactive Distributed Key ...
32. [2025/1925] Improved Modeling for Substitution Boxes with ...
33. [2025/1926] Hashing-friendly elliptic curves
34. [2025/1927] Accelerating LWE-Based Post-Quantum Cryptography ...
35. [2025/1928] Optimizing the Post Quantum Signature Scheme CROSS ...
36. [2025/1929] Cryptanalysis of a Post-Quantum Signature Scheme ...
37. [2025/1930] Attention is still what you need: Another Round of ...
38. [2025/1931] Differential-Linear Cryptanalysis of GIFT family ...
39. [2025/1932] Decoding Balanced Linear Codes With Preprocessing
40. [2025/1933] Revisiting Multi-Key Blind Rotation: Optimized ...
41. [2025/1934] HORCRUX - A Lightweight PQC-RISC-V eXtension ...
42. [2025/1935] Fully Homomorphic Encryption for Matrix Arithmetic
43. [2025/1936] Overshooting the Threshold: ($td+n$)-Masking
44. [2025/1937] Noisy Function Secret Sharing and its applications ...
45. [2025/1938] zk-Cookies: Continuous Anonymous Authentication for ...
46. [2025/1939] Efficient Polynomial Multiplication for HQC on ARM ...
47. [2025/1940] GPV Preimage Sampling with Weak Smoothness and Its ...
48. [2025/1941] Adaptively-Secure Three-Round Threshold Schnorr from DL
49. [2025/1942] Privacy-Preserving Shape Matching with Leveled ...
50. [2025/1943] Circuit-Succinct Algebraic Batch Arguments from ...
51. [2025/1944] Architectural Leakage Analysis of Masked ...
52. [2025/1945] So about that Quantum Lattice Thing: Rebuttal to ...
53. [2025/1946] Robust and Scalable Lattice-Based Distributed Key ...
54. [2025/1947] Minicrypt PRFs Do Not Admit Black-Box Oblivious ...
55. [2025/1948] Feedback Lunch: Deep Feedback Codes for Wiretap ...
56. [2025/1949] On the Credibility of Deniable Communication in Court
57. [2025/1950] Fully Adaptive FROST in the Algebraic Group Model ...
58. [2025/1951] What is Cryptography Hiding from Itself?
59. [2025/1952] KPIR-C: Keyword PIR with Arbitrary Server-side ...
## 2023/204
* Title: TreePIR: Sublinear-Time and Polylog-Bandwidth Private Information Retrieval from DDH
* Authors: Arthur Lazzaretti, Charalampos Papamanthou
* [Permalink](
https://eprint.iacr.org/2023/204)
* [Download](
https://eprint.iacr.org/2023/204.pdf)
### Abstract
In Private Information Retrieval (PIR), a client wishes to retrieve the value of an index $i$ from a public database of $N$ values without leaking information about the index $i$. In their recent seminal work, Corrigan-Gibbs and Kogan (EUROCRYPT 2020) introduced the first two-server PIR protocol with sublinear amortized server time and sublinear, $O(\sqrt{N}\log N)$ bandwidth. In a follow-up work, Shi et al. (CRYPTO 2021) reduced the bandwidth to polylogarithmic by proposing a construction based on privately puncturable pseudorandom functions, a primitive whose only construction known to date is based on heavy cryptographic primitives. Partly because of this, their PIR protocol does not achieve concrete efficiency.
In this paper we propose TreePIR, a two-server PIR protocol with sublinear amortized server time and polylogarithmic bandwidth whose security can be based on just the DDH assumption. TreePIR can be partitioned in two phases, both sublinear: The first phase is remarkably simple and only requires pseudorandom generators. The second phase is a single-server PIR protocol on \emph{only} $\sqrt{N}$ indices, for which we can use the protocol by D\"ottling et al. (CRYPTO 2019) based on DDH, or, for practical purposes, the most concretely efficient single-server PIR protocol. Not only does TreePIR achieve better asymptotics than previous approaches while resting on weaker cryptographic assumptions, but it also outperforms existing two-server PIR protocols in practice. The crux of our protocol is a new cryptographic primitive that we call weak privately puncturable pseudorandom functions, which we believe can have further applications.
## 2024/883
* Title: Low-Latency Linear Transformations with Small Key Transmission for Private Neural Network on Homomorphic Encryption
* Authors: Byeong-Seo Min, Joon-Woo Lee
* [Permalink](
https://eprint.iacr.org/2024/883)
* [Download](
https://eprint.iacr.org/2024/883.pdf)
### Abstract
The widespread adoption of Machine Learning as a Service has made data privacy a critical challenge. Homomorphic Encryption (HE) offers a robust solution by enabling computations on encrypted data, yet it faces practical hurdles such as high computational latency and large key sizes. Consequently, designing efficient HE-based AI models is an active area of research. A primary focus is the HE implementation of convolution, a fundamental operation in CNNs and increasingly used in modern architectures such as transformers and state-space models. To realize HE convolution, various packing strategies---defining the data layout within a ciphertext---have been introduced, leading to state-of-the-art methods such as multiplexed parallel convolution (MPConv), which is implemented with multiplexed parallel packing. In this paper, we propose Rotation-Optimized Multiplexed Parallel Convolution (RO-MPConv), which enhances usability by reducing the number of rotation operations and rotation keys---major contributors to latency and key sizes in MPConv. Additionally, we introduce a small-level key system to further decrease key sizes and a novel parallel baby-step giant-step matrix-vector multiplication, which reduces rotations for packing strategies with multiple identical data entries, including multiplexed parallel packing. We conducted all experiments using the Lattigo library's CKKS HE scheme. Experimental results demonstrate that RO-MPConv achieves up to an 81% reduction in latency and a 29x reduction in rotation key size compared to MPConv. Furthermore, integrating RO-MPConv into existing state-of-the-art models improved their total latency by up to 26%, and our proposed matrix-vector multiplication method reduced latency by 69%. Our code is fully available from
https://github.com/byeongseomin51/RO-MPConv.git.
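For readers unfamiliar with the rotation accounting above, the sketch below simulates, on plaintext NumPy vectors, the standard Halevi-Shoup diagonal method for matrix-vector products under rotations, together with the baby-step giant-step regrouping that cuts the rotations applied to the encrypted vector from roughly $n$ to roughly $2\sqrt{n}$. All names and parameters are illustrative: this is the generic BSGS idea the abstract builds on, not the paper's RO-MPConv algorithm or its parallel variant.

```python
import numpy as np

def rot(v, i):
    """Cyclic left rotation: rot(v, i)[k] = v[(k + i) % n]."""
    return np.roll(v, -i)

def diag(M, i):
    """i-th generalized diagonal of M: diag(M, i)[k] = M[k, (k + i) % n]."""
    n = M.shape[0]
    return np.array([M[k, (k + i) % n] for k in range(n)])

def matvec_naive(M, v):
    """Diagonal method: Mv = sum_i diag_i(M) * rot(v, i) -- n - 1 rotations of v."""
    n = len(v)
    out = np.zeros(n)
    for i in range(n):
        out = out + diag(M, i) * rot(v, i)
    return out, n - 1

def matvec_bsgs(M, v, g):
    """BSGS regrouping i = g*j + b: only (g - 1) baby rotations of v plus
    (n/g - 1) giant rotations of partial sums touch the 'ciphertext'."""
    n = len(v)
    baby = [rot(v, b) for b in range(g)]              # g - 1 rotations (b = 0 is free)
    out = np.zeros(n)
    for j in range(n // g):
        inner = np.zeros(n)
        for b in range(g):
            # the diagonal is pre-rotated by -g*j on the plaintext side (free)
            inner = inner + rot(diag(M, g * j + b), -g * j) * baby[b]
        out = out + rot(inner, g * j)                 # n/g - 1 rotations (j = 0 is free)
    return out, (g - 1) + (n // g - 1)

n, g = 16, 4
M, v = np.random.rand(n, n), np.random.rand(n)
y_naive, r_naive = matvec_naive(M, v)
y_bsgs, r_bsgs = matvec_bsgs(M, v, g)
assert np.allclose(M @ v, y_naive) and np.allclose(M @ v, y_bsgs)
print(f"rotations of the encrypted vector: naive {r_naive}, BSGS {r_bsgs}")  # 15 vs 6
```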
## 2025/1299
* Title: Sota Voce: Low-Noise Sampling of Sparse Fixed-Weight Vectors
* Authors: Décio Luiz Gazzoni Filho, Gora Adj, Slim Bettaieb, Alessandro Budroni, Jorge Chávez-Saab, Francisco Rodríguez-Henríquez
* [Permalink](
https://eprint.iacr.org/2025/1299)
* [Download](
https://eprint.iacr.org/2025/1299.pdf)
### Abstract
Many post-quantum cryptosystems require generating an $n$-bit binary vector with a prescribed Hamming weight $\omega$, a process known as \emph{fixed-weight sampling}. When $\omega = O(n)$, we call this \emph{dense} fixed-weight sampling, which commonly appears in lattice-based cryptosystems, like those in the NTRU family. In contrast, code-based cryptosystems typically use \emph{sparse} fixed-weight sampling with $\omega = o(n)$ (e.g., $O(\sqrt{n})$). Sparse fixed-weight sampling generally involves three constant-time steps to keep the sampled vector secret: 1. sample $\omega$ nearly uniform random integers from a series of decreasing intervals; 2. map these integers into a set of $\omega$ distinct indices in $[0, n)$, called the \emph{support}; 3. generate a binary $n$-bit vector with bits set only at the support indices.
Remarkably, some of the core algorithms employed in fixed-weight sampling date back nearly a century, yet developing efficient and secure techniques remains essential for modern post-quantum cryptographic applications.
In this paper, we present novel algorithms for steps two and three of the fixed-weight sampling process. We demonstrate their practical applicability by replacing the current fixed-weight sampling routine in the HQC post-quantum key encapsulation mechanism, recently selected for NIST standardization. We rigorously prove that our procedures are sound, secure, and introduce little to no bias. Our implementation of the proposed algorithms accelerates step 2 by up to $2.9\times$ and step 3 by up to $5.8\times$ compared to an optimized version of the fixed-weight sampler currently used in HQC. Since fixed-weight sampling constitutes a significant portion of HQC's execution time, these speedups translate into protocol-level improvements of up to $1.37\times$, $1.28\times$ and $1.21\times$ for key generation, encapsulation and decapsulation, respectively.
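As a point of reference for the three steps above, here is a minimal, non-constant-time Python sketch of sparse fixed-weight sampling using Floyd's classic subset-sampling algorithm (sampling from a series of growing ranges, a close relative of the decreasing-interval formulation described above). The function name and the example sizes are illustrative; the paper's contribution is precisely to make steps 2 and 3 fast while keeping them constant-time, which this sketch does not attempt.

```python
import secrets

def fixed_weight_vector(n: int, w: int) -> list[int]:
    """Sample an n-bit vector of Hamming weight w (not constant-time)."""
    # Steps 1-2: for i = n-w .. n-1, draw l uniformly in [0, i]; keep l unless it
    # collides with an earlier index, in which case keep i (Floyd's algorithm).
    support = []
    for i in range(n - w, n):
        l = secrets.randbelow(i + 1)
        support.append(l if l not in support else i)
    # Step 3: expand the w distinct support indices into the bit vector.
    vec = [0] * n
    for idx in support:
        vec[idx] = 1
    return vec

v = fixed_weight_vector(n=64, w=8)   # small illustrative sizes
assert sum(v) == 8
```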
## 2025/1597
* Title: The State-Test Technique on Differential Attacks: a 26-Round Attack on CRAFT and Other Applications
* Authors: Dounia M'Foukh, María Naya-Plasencia, Patrick Neumann
* [Permalink](
https://eprint.iacr.org/2025/1597)
* [Download](
https://eprint.iacr.org/2025/1597.pdf)
### Abstract
The state-test technique, originally introduced in the context
of impossible-differential cryptanalysis and recently used as an improvement for truncated-differential Meet-in-the-Middle attacks, has proven to be useful for reducing the complexity of attacks. In essence, the idea is to guess parts of the state instead of the key during the key-guessing stage of an attack, ultimately reducing the number of guesses needed.
We generalize the idea of the state-test technique, allowing it to be applied not only to impossible-differential and truncated-differential Meet-in-the-Middle attacks, but also to differential and differential-linear cryptanalysis, and we also propose a new, performant technique that combines the state-test technique with probabilistic key recovery. Additionally, we provide insights on the interaction between cipher and difference needed for the state-test technique to be applicable, finding it to be a promising option for many ciphers.
To illustrate our findings, we provide 3 new applications of the state-test technique: we show how it can be used to improve the best known attack on the block cipher Pride, how it can be used to improve a step in the best known attack on Serpent, and use it to present the first known attacks on 24, 25 and 26 rounds of CRAFT (out of 32), improving by up to three rounds over the previous best ones.
## 2025/1599
* Title: AVX2 Implementation of QR-UOV for Modern x86 Processors
* Authors: Hiroshi Amagasa, Rei Ueno, Naofumi Homma
* [Permalink](
https://eprint.iacr.org/2025/1599)
* [Download](
https://eprint.iacr.org/2025/1599.pdf)
### Abstract
QR-UOV is a multivariate signature scheme selected as one of the candidates in the second round of the NIST PQC Additional Digital Signatures process. This paper presents software acceleration methods for QR-UOV optimized for modern x86 architectures. QR-UOV operates over small odd prime-power extension fields such as $\mathrm{GF}(31^3)$ and $\mathrm{GF}(127^3)$ unlike other multivariate cryptosystem candidates.
This property allows direct utilization of hardware multipliers for field arithmetic, offering a distinctive advantage for high-performance implementations.
Yet, how to implement QR-UOV efficiently on modern CPUs based on this property has so far remained unclear. Our implementation benefits from two proposed optimizations: (1) reducing the computational overhead of the QR-UOV algorithm through algorithm-level optimization, and (2) leveraging advanced SIMD instruction set extensions (e.g., AVX2, AVX512) to accelerate main operations such as matrix multiplication. Our implementation achieves substantial speedups over the Round 2 reference: for the parameter set $(q,\ell)=(127,3)$ at NIST security level I, it delivers a $5.1\times$ improvement in key generation, $3.6\times$ in signature generation, and $5.7\times$ in signature verification. These results demonstrate that QR-UOV achieves performance comparable to or higher than that of UOV implementations, particularly at higher security levels.
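To make the hardware-multiplier remark concrete, the sketch below multiplies two elements of $\mathrm{GF}(31^3)$ represented as polynomials over $\mathrm{GF}(31)$ modulo an irreducible cubic. The modulus $x^3 - 3$ is an arbitrary choice for illustration (irreducible because $3$ is a non-cube modulo $31$, which the assertion re-checks), not the modulus specified by QR-UOV; the point is simply that every coefficient operation is a small integer multiply-and-reduce, which maps naturally onto integer multipliers and SIMD lanes.

```python
p = 31   # base field GF(31); QR-UOV also uses fields such as GF(127^3)

# x^3 - 3 has no root mod 31, hence is irreducible (sufficient for a cubic).
assert all(pow(r, 3, p) != 3 % p for r in range(p))

def gf_p3_mul(a, b):
    """Multiply a0 + a1*x + a2*x^2 by b0 + b1*x + b2*x^2 in GF(p)[x] / (x^3 - 3).
    Every step is a small integer multiply-and-reduce mod p."""
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] = (c[i + j] + a[i] * b[j]) % p
    # fold the high terms back down with x^3 = 3 and x^4 = 3x
    return [(c[0] + 3 * c[3]) % p, (c[1] + 3 * c[4]) % p, c[2] % p]

one = [1, 0, 0]
a, b, c = [5, 7, 11], [2, 30, 4], [13, 0, 29]
assert gf_p3_mul(a, one) == a
assert gf_p3_mul(gf_p3_mul(a, b), c) == gf_p3_mul(a, gf_p3_mul(b, c))
```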
## 2025/1824
* Title: Coppercloud: Blind Server-Supported RSA Signatures
* Authors: Nikita Snetkov, Mihkel Jaas Karu, Jelizaveta Vakarjuk, Alisa Pankova
* [Permalink](
https://eprint.iacr.org/2025/1824)
* [Download](
https://eprint.iacr.org/2025/1824.pdf)
### Abstract
Privacy-preserving technologies are a well-established area of research for modern digital systems, driven by regulatory mandates such as the GDPR and by growing demand for user privacy. One example among the available technologies is blind signatures: a primitive that supports the creation of privacy-sensitive applications, from secure e-voting and anonymous digital cash to privacy-preserving credentials and identity management systems. In this work, we introduce Coppercloud, a blind server-supported RSA signature scheme designed to enhance privacy in digital identity systems. Coppercloud enables a user to obtain a signature on a message, without revealing its content to the supporting server, while distributing the signing key between the user's device and the supporting server. We formalize the security requirements for blind server-supported signing by defining an ideal functionality, and prove that Coppercloud securely realizes this functionality in the Universal Composability (UC) model.
## 2025/1899
* Title: CoupledNets: Resisting Feature Snooping Attacks on Neural Processing Units through Noise Injection into Models
* Authors: Sachintha Kavishan Jayarathne, Seetal Potluri
* [Permalink](
https://eprint.iacr.org/2025/1899)
* [Download](
https://eprint.iacr.org/2025/1899.pdf)
### Abstract
Feature snooping has been shown to be very effective for stealing cost-sensitive models executing on neural processing units. Existing model obfuscation defenses protect the weights directly, but do not protect the features that hold information on the weights in indirect form. This paper proposes CoupledNets, the first model obfuscation defense that protects the intermediate features during inference. The obfuscation is performed during the training phase, by injecting noise, customized on the theme of neuron coupling, so as to make cryptanalysis mathematically impossible during the inference phase. When implemented across a wide range of neural network architectures and datasets, on average, CoupledNets demonstrated > 80% drop in the accuracy of the obfuscated model, with little impact on the functional accuracy and training times.
## 2025/1900
* Title: Beholder Signatures
* Authors: Stefan Dziembowski, Sebastian Faust, Paweł Kędzior, Marcin Mielniczuk, Susil Kumar Mohanty, Krzysztof Pietrzak
* [Permalink](
https://eprint.iacr.org/2025/1900)
* [Download](
https://eprint.iacr.org/2025/1900.pdf)
### Abstract
We introduce a new primitive, called beholder signatures, which, in some sense, are the opposite of blind signatures. In a beholder signature, one signs a commitment to a (potentially very long) message, and the signature attests that the parties participating in the signing process who know the secret key, jointly also know the entire committed message. This guarantee holds even against distributed adversaries that use secure multi-party computation (MPC) to produce the signature. We work in the distributed adversarial model (Dziembowski, Faust, and Lizurej, Crypto'23), where one assumes that it is infeasible to evaluate a large number of hash queries without any of the participating parties learning the input. We propose a construction of beholder signatures in the random oracle model. The starting point of our construction is proofs of complete knowledge, recently proposed by (Kelkar et al. CCS'24), which again build on Fischlin's transformation of a sigma protocol to a noninteractive, straight-line extractable zero-knowledge proof of knowledge. Our scheme is concretely efficient and comes with a proof-of-concept implementation using Schnorr as the underlying sigma protocol.
The primary applications of beholder signatures can be found within the blockchain ecosystem. In particular, we describe how to use them to construct proofs of custody (Feist, 2021) that do not require ephemeral keys and are noninteractive. We also outline applications to data dissemination, data availability, and proofs of replication.
## 2025/1901
* Title: Towards formal verification and corrupted setup security for the SwissPost voting system
* Authors: Sevdenur Baloglu, Sergiu Bursuc, Reynaldo Gil-Pons, Sjouke Mauw
* [Permalink](
https://eprint.iacr.org/2025/1901)
* [Download](
https://eprint.iacr.org/2025/1901.pdf)
### Abstract
The Swiss Post voting system is one of the most advanced cryptographic voting protocols deployed for political elections, offering end-to-end verifiability and vote privacy. It provides significant documentation and independent scrutiny reports. Still, we argue that two significant pillars of trust need to be further developed. One is formal verification accompanied by machine-checked proofs. The second is security in the presence of a corrupt setup component. In this work, we propose formal specifications of a simplified version of the Swiss Post voting protocol and initial verification results with the Tamarin prover. We also propose a revised protocol design that mitigates risks from a corrupt setup, and a prototype implementation of the necessary zero-knowledge proofs.
## 2025/1902
* Title: Multi-Party Functional Encryption (MPFE): A powerful tool in the distributed and decentralized world
* Authors: Ruxandra F. Olimid
* [Permalink](
https://eprint.iacr.org/2025/1902)
* [Download](
https://eprint.iacr.org/2025/1902.pdf)
### Abstract
Functional Encryption (FE) is a concept that generalizes public-key encryption, allowing a party that owns a private key to find a function of the plaintext (instead of the plaintext itself). Multi-Party Functional Encryption (MPFE) generalizes this concept and adapts it to multi-party settings, allowing for decentralization in both the ciphertexts---which might originate from multiple sources---and the keys---thereby eliminating the necessity of a central authority and avoiding the introduction of a single point of trust and failure. The current paper presents a substantial foundation of MPFE to the non-specialist reader. It provides definitions, classifications, and discusses properties of MPFE and its relation with other cryptographic concepts. The potential applicability of MPFE, which covers multiple domains and use cases, is discussed. The paper investigates the real-world adoption of MPFE, including its presence in technical specifications or its availability in open-source libraries. Finally, the current study discusses challenges and therefore opens up new research directions.
## 2025/1903
* Title: HyperWolf: Lattice Polynomial Commitments with Standard Soundness
* Authors: Lizhen Zhang, Shang Gao, Sherman S. M. Chow, Kurt Pan, Bin Xiao
* [Permalink](
https://eprint.iacr.org/2025/1903)
* [Download](
https://eprint.iacr.org/2025/1903.pdf)
### Abstract
We present $\mathsf{HyperWolf}$, a lattice-based, fully transparent polynomial commitment scheme (PCS) for univariate and multilinear polynomials.
To the best of our knowledge, it is the first lattice PCS to simultaneously achieve logarithmic proof size and verification time with standard soundness under standard lattice assumptions over polynomial rings.
Building on sublinear schemes such as $\mathsf{Greyhound}$ (CRYPTO'24) and $\mathsf{BrakeDown}$ (CRYPTO'23), we generalize the two-dimensional approach to a $k$-dimensional witness-folding recursion, yielding a $k$-round hyperdimensional proof.
Each round folds the witness along one axis, reducing the tensor arity by one, giving overall cost $O(k N^{1/k})$; choosing $k = \log N$ yields $O(\log N)$ verification time and proof size.
For standard $\ell_2$ soundness,
we give an exact Euclidean-norm proof tailored to lattice relations:
we prove $\langle \vec{f}, \vec{f}\rangle \bmod q$ via an inner-product argument and enforce a small-coefficient bound on $\|\vec{f}\|_\infty$ so that $\langle \vec{f}, \vec{f}\rangle \bmod q = \langle \vec{f}, \vec{f}\rangle$ over $\mathbb{Z}$.
Both sub-proofs admit the same structure for $O(\log N)$ complexity.
We further compact the proof using a proof-of-proof IPA à la LaBRADOR (CRYPTO'23), attaining $O(\log\log\log{N})$ while preserving logarithmic verification and linear proving.
We also describe a candidate optimization that achieves $O(\log\log N)$ proofs without LaBRADOR.
For $N = 2^{30}$, $\mathsf{HyperWolf}$ features a ${\sim}53$ KB proof size and, compared to $\mathsf{Greyhound}$, reduces verifier work from $\Theta(\sqrt{N})$ to $\Theta(\log N)$, yielding $2$ to $3$ orders of magnitude improvement for large $N$ while maintaining comparable size.
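Two small glosses on the statements above, in our reading rather than the authors' wording: the coefficient bound is what makes the mod-$q$ inner product exact, and the choice $k = \log N$ is what collapses the folding cost to $O(\log N)$.

```latex
% A coefficient bound rules out modular wraparound of the inner product:
\[
\|\vec f\|_\infty \le B \;\Longrightarrow\; 0 \le \langle \vec f, \vec f\rangle \le N B^2,
\qquad
N B^2 < q \;\Longrightarrow\; \langle \vec f, \vec f\rangle \bmod q \;=\; \langle \vec f, \vec f\rangle \ \text{over } \mathbb{Z}.
\]
% Parameter choice behind the O(log N) claim: the k-round folding costs O(k N^{1/k}),
% and with k = \log_2 N we get N^{1/k} = 2, hence k N^{1/k} = 2\log_2 N = O(\log N).
```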
## 2025/1904
* Title: Predicting Module-Lattice Reduction
* Authors: Léo Ducas, Lynn Engelberts, Paola de Perthuis
* [Permalink](
https://eprint.iacr.org/2025/1904)
* [Download](
https://eprint.iacr.org/2025/1904.pdf)
### Abstract
Is module-lattice reduction better than unstructured lattice reduction? This question was highlighted as 'Q8' in the Kyber NIST standardization submission (Avanzi et al., 2021), as potentially affecting the concrete security of Kyber and other module-lattice-based schemes. Foundational works on module-lattice reduction (Lee, Pellet-Mary, Stehlé, and Wallet, ASIACRYPT 2019; Mukherjee and Stephens-Davidowitz, CRYPTO 2020) confirmed the existence of such module variants of LLL and block-reduction algorithms, but focus only on provable worst-case asymptotic behavior.
In this work, we present a concrete average-case analysis of module-lattice reduction. Specifically, we address the question of the expected slope after running module-BKZ, and pinpoint the discriminant $\Delta_K$ of the number field at hand as the main quantity driving this slope. We convert this back into a gain or loss on the blocksize $\beta$: module-BKZ in a number field $K$ of degree $d$ requires an SVP oracle of dimension $\beta + \log(|\Delta_K| / d^d)\beta /(d\log \beta) + o(\beta / \log \beta)$ to reach the same slope as unstructured BKZ with blocksize $\beta$. This asymptotic summary hides further terms that we predict concretely using experimentally verified heuristics. Incidentally, we provide the first open-source implementation of module-BKZ for some cyclotomic fields.
For power-of-two cyclotomic fields, we have $|\Delta_K| = d^d$, and conclude that module-BKZ requires a blocksize larger than its unstructured counterpart by $d-1+o(1)$. On the contrary, for all other cyclotomic fields we have $|\Delta_K| < d^d$, so module-BKZ provides a sublinear $\Theta(\beta/\log \beta)$ gain on the required blocksize, yielding a subexponential speedup of $\exp(\Theta(\beta/\log \beta))$.
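As a rough illustration of the blocksize formula quoted above, the sketch below evaluates the leading shift term $\log(|\Delta_K|/d^d)\,\beta/(d \log\beta)$ for a few cyclotomic fields, computing $|\Delta_K|$ from the standard cyclotomic discriminant formula. The choice of natural logarithms, the example conductors, and the blocksize $\beta = 400$ are our assumptions for illustration, and the $o(\beta/\log\beta)$ and lower-order terms that the paper predicts experimentally are ignored.

```python
import math

def prime_factors(m: int) -> set[int]:
    fs, p = set(), 2
    while p * p <= m:
        while m % p == 0:
            fs.add(p)
            m //= p
        p += 1
    if m > 1:
        fs.add(m)
    return fs

def phi(m: int) -> int:
    d = m
    for p in prime_factors(m):
        d = d // p * (p - 1)
    return d

def log_abs_disc_cyclotomic(m: int) -> float:
    """log |Delta_K| for K = Q(zeta_m): |Delta| = m^phi(m) / prod_{p | m} p^(phi(m)/(p-1))."""
    d = phi(m)
    return d * math.log(m) - sum(d / (p - 1) * math.log(p) for p in prime_factors(m))

def blocksize_shift(m: int, beta: float) -> float:
    """Leading term log(|Delta_K| / d^d) * beta / (d * log beta) from the abstract."""
    d = phi(m)
    return (log_abs_disc_cyclotomic(m) - d * math.log(d)) * beta / (d * math.log(beta))

for m in (256, 512, 3, 9, 91):
    print(f"conductor {m:>3}, degree {phi(m):>3}, shift at beta=400: {blocksize_shift(m, 400):+.2f}")
# Power-of-two conductors give |Delta_K| = d^d, so the leading shift vanishes and only the
# lower-order d - 1 + o(1) penalty quoted above remains; the other cyclotomics give a
# negative shift, i.e. a blocksize gain for module-BKZ.
```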
## 2025/1905
* Title: Symphony: Scalable SNARKs in the Random Oracle Model from Lattice-Based High-Arity Folding
* Authors: Binyi Chen
* [Permalink](
https://eprint.iacr.org/2025/1905)
* [Download](
https://eprint.iacr.org/2025/1905.pdf)
### Abstract
Folding schemes are a powerful tool for building scalable proof systems. However, existing folding-based SNARKs require embedding hash functions (modeled as random oracles) into SNARK circuits, introducing both security concerns and significant proving overhead.
We re-envision how to use folding, and introduce Symphony, the first folding-based SNARK that avoids embedding hashes in SNARK circuits. It is memory-efficient, parallelizable, streaming-friendly, plausibly post-quantum secure, with polylogarithmic proof size and verification, and a prover dominated by committing to the input witnesses.
As part of our construction, we introduce a new lattice-based folding scheme that compresses a large number of NP-complete statements into one in a single shot, which may be of independent interest. Furthermore, we design a generic compiler that converts a folding scheme into a SNARK without embedding the Fiat-Shamir circuit into proven statements. Our evaluation shows its concrete efficiency, making Symphony a promising candidate for applications such as zkVM, proof of learning, and post-quantum aggregate signatures.
## 2025/1906
* Title: A Simple and Efficient One-Shot Signature Scheme
* Authors: Andrew Huang, Vinod Vaikuntanathan
* [Permalink](
https://eprint.iacr.org/2025/1906)
* [Download](
https://eprint.iacr.org/2025/1906.pdf)
### Abstract
One-shot signatures (OSS) are a powerful and uniquely quantum cryptographic primitive which allows anyone, given a common reference string, to come up with a public verification key $\mathsf{pk}$ and a secret signing state $\ket{\mathsf{sk}}$. With the secret signing state, one can produce the signature of any one message, but no more. In a recent breakthrough work, Shmueli and Zhandry (CRYPTO 2025) constructed one-shot signatures, either unconditionally in a classical oracle model or assuming post-quantum indistinguishability obfuscation and the hardness of Learning with Errors (LWE) in the plain model.
In this work, we address the inefficiency of the Shmueli-Zhandry construction which signs messages bit-by-bit, resulting in signing keys of $\Theta(\lambda^4)$ qubits and signatures of size $\Theta(\lambda^3)$ bits for polynomially long messages, where $\lambda$ is the security parameter. We construct a new, simple, direct, and efficient one-shot signature scheme which can sign messages of any polynomial length using signing keys of $\Theta(\lambda^2)$ qubits and signatures of size $\Theta(\lambda^2)$ bits. We achieve corresponding savings in runtimes, in both the oracle model and the plain model. In addition, unlike the Shmueli-Zhandry construction, our scheme achieves perfect correctness.
Our scheme also achieves strong signature incompressibility, which implies a public-key quantum fire scheme with perfect correctness among other applications, correcting an error in a recent work of Çakan, Goyal and Shmueli (QCrypt 2025) and recovering their applications.
## 2025/1908
* Title: MPCitH Signature from Restricted Syndrome Decoding
* Authors: Michele Battagliola, Ethan Chen, Hugo Sauerbier Couvée, Violetta Weger
* [Permalink](
https://eprint.iacr.org/2025/1908)
* [Download](
https://eprint.iacr.org/2025/1908.pdf)
### Abstract
CROSS is a code-based signature based on the Restricted Syndrome Decoding Problem (R-SDP) that is currently among the fourteen candidates in the NIST standardization process. While CROSS enjoys a very competitive verification time, its primary drawback is its significantly large signature size. In this work, we introduce a new Multi-Party Computation in the Head (MPCitH) protocol for the R-SDP with the primary goal of reducing CROSS signature size. To do so, we design a publicly verifiable secret sharing scheme tailored for restricted vectors and a new multiplicative-to-additive conversion for it. These new cryptographic gadgets may be of independent interest as they can serve as building blocks for future research in multi-party computation, such as a threshold version of CROSS.
## 2025/1909
* Title: Weak Instances of the Inverse Matrix Code Equivalence Problem
* Authors: Jesús-Javier Chi-Domínguez
* [Permalink](
https://eprint.iacr.org/2025/1909)
* [Download](
https://eprint.iacr.org/2025/1909.pdf)
### Abstract
Nowadays, the Matrix Code Equivalence Problem shows potential applicability in constructing efficient and secure advanced digital signatures, focusing on linkable ring signatures, threshold signatures, and blind signatures. Current constructions of these advanced signatures rely on relaxed instantiations of the Matrix Code Equivalence Problem: given two pairs of equivalent matrix codes, find (if it exists) the secret isometry connecting the pairs. For example, the linkable ring signature construction by Chou et al. (AFRICACRYPT, 2023) builds on top of the Inverse Matrix Code Equivalence Problem: given three equivalent matrix codes, where one pair of the codes is connected by the secret isometry and another by the inverse of that isometry, find the secret isometry.
This paper studies the Inverse Matrix Code Equivalence Problem, focusing on the family of instances where the secret isometry is (skew) symmetric. Our main contribution is a new algorithm for solving these instances of the Inverse Matrix Code Equivalence Problem. As an implication, we identify weak instances of this kind of instantiation of the Inverse Matrix Code Equivalence Problem: for around 70% of the possible parameter-set choices (i.e., code dimension $k$ and code lengths $m$ and $n$), our algorithm runs (heuristically) in polynomial time. In addition, our results spotlight an additional 35% of parameter sets where the best algorithm for solving the Matrix Code Equivalence Problem, proposed by Couvreur and Levrat (Crypto, 2025), does not apply.
Our results have a crucial security impact on the recent blind signature construction proposed by Kuchta, LeGrow, and Persichetti (ePrint IACR, 2025), whose security is closely related to the hardness of solving these kinds of instances of the Inverse Matrix Code Equivalence Problem.
## 2025/1910
* Title: Fast Slicer for Batch-CVP: Making Lattice Hybrid Attacks Practical
* Authors: Alexander Karenin, Elena Kirshanova, Alexander May, Julian Nowakowski
* [Permalink](
https://eprint.iacr.org/2025/1910)
* [Download](
https://eprint.iacr.org/2025/1910.pdf)
### Abstract
We study the practicality of a primal hybrid lattice attack on LWE. Despite significant research efforts, the state-of-the-art in practical LWE record computations so far is the plain primal attack with Kannan's embedding.
Building on an idea by Espitau and Kirchner, Bernstein proposed in 2023 an LWE hybrid attack that asymptotically outperforms the primal attack. In a nutshell, Bernstein's attack enumerates some coordinates of an LWE key and uses the sophisticated Batch-CVP {\em (Randomized) Slicer} algorithm to solve LWE in a dimension-reduced lattice.
The practical implications of this attack however remain widely unclear. One of the major obstacles for judging practicality is the lack of a fast, fully functional Slicer implementation. For the first time, we provide an efficient Slicer implementation that includes all required algorithmic ingredients like locality sensitive hashing.
Building on our Slicer implementation, we implement a generalization of Bernstein's algorithm.
While Bernstein's attack works only for LWE, ours also applies to a more general BDD setting.
Let $(\mathbf{B}, \mathbf{t})$ be a BDD instance, where the target $\mathbf{t}$ is off from the $d$-dimensional lattice $\mathcal{L}(\mathbf{B})$ by some error~$\mathbf{e}$, sampled coordinate-wise from a distribution $\mathcal{D}$.
We show that for hard BDD instances, our BDD hybrid asymptotically speeds up primal's complexity of $T=2^{0.292d + o(d)}$ down to $T^{1-\mathcal{K}}$, where $\mathcal{K} \approx \big(1+\frac{H(\mathcal{D})}{0.058}\big)^{-1}$ with $H(\cdot)$ the Shannon entropy. Depending on $\mathcal{D}$, the constant $\mathcal{K}$ can be small, making practical improvements difficult.
We test our Slicer-based implementation inside an implementation of our BDD hybrid lattice attack to tackle LWE instances.
We choose two ternary LWE secrets with different entropies $H(\mathcal{D})$ as used in NTRU, and the centered binomial distribution as used in Kyber. For all three distributions in all tested LWE dimensions $n \in [160, 210]$, our Slicer-based implementation practically demonstrates measurable speedups over the primal attack, up to a factor of $5$. We also show that for parameters as originally suggested by Regev, the hybrid attack cannot improve over primal.
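A quick back-of-the-envelope evaluation of the exponent formula quoted above; the entropy is taken in bits (our assumption, matching the base-2 constant $0.292$), and the two example distributions (uniform ternary and centered binomial with $\eta = 2$) are stand-ins for the secrets mentioned in the abstract, not the exact distributions used in the experiments.

```python
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def K(H):
    return 1.0 / (1.0 + H / 0.058)

H_ternary = entropy_bits([1/3, 1/3, 1/3])                    # uniform ternary {-1, 0, 1}
H_cbd2 = entropy_bits([1/16, 4/16, 6/16, 4/16, 1/16])        # centered binomial, eta = 2

for name, H in [("uniform ternary", H_ternary), ("centered binomial eta=2", H_cbd2)]:
    k = K(H)
    print(f"{name}: H = {H:.3f} bits, K = {k:.4f}, exponent 0.292 -> {0.292 * (1 - k):.4f}")
# Lower-entropy error distributions give a larger K and hence a larger asymptotic speedup
# over 2^(0.292 d + o(d)); for these distributions K is only a few percent, matching the
# abstract's remark that practical improvements are hard-won.
```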
## 2025/1911
* Title: Differential Meet-in-the-Middle Attacks on Feistel Ciphers
* Authors: Bastien Michel, Dounia M'foukh, María Naya-Plasencia
* [Permalink](
https://eprint.iacr.org/2025/1911)
* [Download](
https://eprint.iacr.org/2025/1911.pdf)
### Abstract
Differential meet-in-the-middle attacks, introduced by Boura et al. in 2023, propose a new way of dealing with differential distinguishers. In particular, they allow combining differential attacks with initial structures, which were previously used exclusively for meet-in-the-middle attacks. Several applications of this new technique have been published, but so far the results on Feistel constructions have not improved much upon previous best known attacks. In this paper, we apply them to Feistel constructions with all the improvements proposed so far, and we propose some additional new ideas to generically improve these kinds of attacks. We also propose an automated tool for optimizing the attacks on Simon-like constructions. Our tool outputs a graphical representation of the attack that makes it very easy to verify. All this has allowed us to provide improved single-key key-recovery attacks on most of the variants of Simon, Simeck and CLEFIA-256, which increase the highest number of rounds attacked by 1 or 2 in nearly all cases.
## 2025/1912
* Title: Quasar: Sublinear Accumulation Schemes for Multiple Instances
* Authors: Tianyu Zheng, Shang Gao, Yu Guo, Bin Xiao
* [Permalink](
https://eprint.iacr.org/2025/1912)
* [Download](
https://eprint.iacr.org/2025/1912.pdf)
### Abstract
Accumulation is a core technique in state-of-the-art Incrementally Verifiable Computations (IVCs), as it avoids recursively implementing costly SNARK verification within circuits. However, the recursion overhead in existing IVCs remains significant due to the accumulation verifier complexity, which scales linearly with the number of accumulated instances. In this work, we present a novel accumulation scheme for multiple instances based on polynomial commitment schemes, achieving a theoretical verifier complexity that is sublinear in the number of instances. Technically, our scheme leverages partial evaluation of polynomials to replace random linear combinations, thereby minimizing the costly Commitment Random Linear Combination (CRC) operations on the verifier side. Building on this accumulation scheme, we introduce Quasar, a multi-instance IVC with small recursion overhead in practice.
Notably, Quasar reduces the number of costly CRC operations in the recursive circuit from linear to quasi-linear, substantially improving practical performance. By instantiating Quasar with appropriate polynomial commitment schemes, it can achieve linear-time accumulation prover complexity, plausible post-quantum security, and support for parallelizable proving at each step.
## 2025/1913
* Title: Unambiguous SNARGs for P from LWE with Applications to PPAD Hardness
* Authors: Liyan Chen, Cody Freitag, Zhengzhong Jin, Daniel Wichs
* [Permalink](
https://eprint.iacr.org/2025/1913)
* [Download](
https://eprint.iacr.org/2025/1913.pdf)
### Abstract
We construct the first unambiguous succinct non-interactive arguments (SNARGs) for P and incrementally verifiable computation (IVC) for P from the polynomial hardness of learning with errors (LWE). Unambiguity guarantees that it is computationally hard to find two distinct accepting proofs for the same statement.
As an application, we establish the first PPAD hardness result based on the polynomial hardness of LWE combined with a widely believed complexity assumption.
Central to our approach is a new notion of rate-1 witness-unambiguous batch arguments for NP, for which we give the first construction from the polynomial hardness of LWE. This notion may be of independent interest.
## 2025/1914
* Title: A Note on ``Designing Anonymous Signature-Based Identity Authentication Scheme for Ocean Multilevel Transmission''
* Authors: Zhengjun Cao, Lihua Liu
* [Permalink](
https://eprint.iacr.org/2025/1914)
* [Download](
https://eprint.iacr.org/2025/1914.pdf)
### Abstract
We show that the authentication scheme (IEEE Internet Things J., 24310-24322, 2024) cannot be practically implemented, because it misused the elliptic curve group law---point multiplication, which is represented as $kP$, where $k$ is an integer and $P$ is a point on an elliptic curve. But the scheme uses the false representation $Q_iQ_j$ to construct verification equations, where $Q_i$ and $Q_j$ are two points. Besides, we find that an adversary can retrieve the target relay device's secret key using the intercepted message via open channels.
## 2025/1915
* Title: A Framework for Efficient Quantum Implementations of Linear Layers
* Authors: Kyungbae Jang, Anubhab Baksi, Hwajeong Seo
* [Permalink](
https://eprint.iacr.org/2025/1915)
* [Download](
https://eprint.iacr.org/2025/1915.pdf)
### Abstract
Quantum depth plays a critical role in determining the performance of quantum implementations, yet quantum programming tools often fail to produce depth-optimal compilations of linear layers. In this work, we present a systematic and automated framework that reorders quantum gate sequences of linear layers to obtain depth-efficient quantum implementations. Our method consistently produces circuits with lower depth compared to prior implementations.
We apply the framework to a range of cryptographic operations, including the AES MixColumn, internal layers of the AES S-box, binary field squaring, and modular reduction in binary field multiplication. In all these cases, our method achieves meaningful reductions in quantum depth---for example, lowering the depth of the AES MixColumn and S-box circuits.
This work explores optimal quantum circuit designs for quantum programming tools, improves the accuracy of quantum resource estimation for cryptanalysis, and supports more realistic evaluations of post-quantum security.
## 2025/1916
* Title: Graeffe-Based Attacks on Poseidon and NTT Lower Bounds
* Authors: Ziyu Zhao, Antonio Sanso, Giuseppe Vitto, Jintai Ding
* [Permalink](
https://eprint.iacr.org/2025/1916)
* [Download](
https://eprint.iacr.org/2025/1916.pdf)
### Abstract
Poseidon and Poseidon2 are cryptographic hash functions crafted for efficient zero-knowledge proof systems and have seen wide adoption in practical applications. We introduce the use of the Graeffe transform in univariate polynomial solving within this line of work. The proposed method streamlines the root recovery process in interpolation attacks and achieves several orders of magnitude acceleration in practical settings, enabling a new and more efficient class of attacks against Poseidon targeting round-reduced permutations and constrained input/output instances. We release open-source code and describe our method in detail, demonstrating substantial improvements over prior approaches: reductions in wall time by a factor of $2^{13}$ and in memory usage by a factor of $2^{4.5}$. Memory-access costs for NTTs turn out to be a dominant barrier in practice, and we prove that this cost increases at least as the $4/3$-power of the input size (up to logarithmic factors), which suggests the commonly used pseudo-linear cost model may underestimate the true resource requirements. This behavior contrasts with multivariate equation solving, whose main bottleneck remains finite-field linear algebra. We argue that, when selecting parameters, designers should account for interpolation-based attacks explicitly, since their practical hardness is determined by different, and sometimes stronger, resource constraints than those of multivariate techniques.
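For readers unfamiliar with the Graeffe transform mentioned above, the sketch below implements the elementary Graeffe step over a toy prime field: from $p(x)$ it computes $g(y) = p_e(y)^2 - y\,p_o(y)^2$, whose roots are the squares of the roots of $p$. The field size and the example polynomial are arbitrary choices for illustration; none of the attack pipeline (interpolation, round-reduced Poseidon, NTT-based arithmetic) is modeled here.

```python
q = 257   # a small toy prime field, not a Poseidon parameter

def poly_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % q
    return c

def graeffe(p):
    """Return g with g(x^2) = p(x) * p(-x): the roots of g are the squares of the roots of p."""
    even, odd = p[0::2], p[1::2]          # p(x) = p_e(x^2) + x * p_o(x^2)
    e2, o2 = poly_mul(even, even), poly_mul(odd, odd)
    g = [0] * max(len(e2), len(o2) + 1)
    for i, v in enumerate(e2):
        g[i] = (g[i] + v) % q
    for i, v in enumerate(o2):
        g[i + 1] = (g[i + 1] - v) % q     # g(y) = p_e(y)^2 - y * p_o(y)^2
    return g

def roots(p):
    return {x for x in range(q) if sum(c * pow(x, i, q) for i, c in enumerate(p)) % q == 0}

# p(x) = (x - 3)(x - 5)(x - 7) over GF(257); one Graeffe step squares the roots.
p = poly_mul(poly_mul([q - 3, 1], [q - 5, 1]), [q - 7, 1])
assert roots(p) == {3, 5, 7}
assert roots(graeffe(p)) == {9, 25, 49}
```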
## 2025/1917
* Title: Embedding belief propagation within a multi-task learning model: An example on Kyber's NTT
* Authors: Thomas Marquet, Elisabeth Oswald
* [Permalink](
https://eprint.iacr.org/2025/1917)
* [Download](
https://eprint.iacr.org/2025/1917.pdf)
### Abstract
Deep learning has become a powerful tool for profiled side-channel analysis, especially for its ability to defeat masking countermeasures. However, obtaining a successful deep learning model when the attacker cannot access the internal randomness of the profiling device remains a challenge. The "plateau effect" hinders the training of the model, as the optimization stalls in flat regions of the loss landscape at initialization, which makes the outcome of the training run uncertain. Previous works showed that multi-task learning makes it possible to overcome this problem by leveraging redundant features across multiple tasks, such as shared randomness or a common masking flow. We continue this line of work by using belief propagation on a larger graph to guide the learning. We introduce a multi-task learning model that explicitly integrates a factor graph reflecting the algebraic dependencies among intermediates in the computation of Kyber's inverse Number Theoretic Transform (iNTT). Such a framework allows the model to learn a joint representation of the related tasks that is mutually beneficial, and provides a mechanism to overcome such plateaus. For the first time, we show that one can perform belief propagation on the masked shares during training, even without access to the internal randomness, potentially greatly improving the performance of the attack.
## 2025/1918
* Title: Differential-MITM Attack on 14-round ARADI
* Authors: Shibam Ghosh, Bastien Michel, María Naya-Plasencia
* [Permalink](
https://eprint.iacr.org/2025/1918)
* [Download](
https://eprint.iacr.org/2025/1918.pdf)
### Abstract
ARADI is a low-latency block cipher introduced by the U.S. National Security Agency (NSA) for secure and efficient memory encryption applications. In contrast to most ciphers proposed in the academic community, the design rationale for ARADI has not been publicly disclosed, limiting external evaluation to independent cryptanalysis. Several such analyses have already been published, with the most effective attacks to date reaching up to 12 out of 16 rounds. In this work, we present a differential meet-in-the-middle attack on ARADI that incorporates several new optimizations and dedicated techniques, enabling, for the first time, an attack extending to 14 rounds of the cipher.
## 2025/1919
* Title: UPPR: Universal Privacy-Preserving Revocation
* Authors: Leandro Rometsch, Philipp-Florens Lehwalder, Anh-Tu Hoang, Dominik Kaaser, Stefan Schulte
* [Permalink](
https://eprint.iacr.org/2025/1919)
* [Download](
https://eprint.iacr.org/2025/1919.pdf)
### Abstract
Self-Sovereign Identity (SSI) frameworks enable individuals to receive and present digital credentials in a user-controlled way. Revocation mechanisms ensure that invalid or withdrawn credentials cannot be misused. These revocation mechanisms must be scalable (e.g., at national scale) and preserve core SSI principles such as privacy, user control, and interoperability. Achieving both is hard, and finding a suitable trade-off remains a key challenge in SSI research. This paper introduces UPPR, a revocation mechanism for One-Show Verifiable Credentials (oVCs) and unlinkable Anonymous Credentials (ACs). Revocations are managed using per-credential Verifiable Random Function (VRF) tokens, which are published in a Bloom filter cascade on a blockchain. Holders prove non-revocation via a VRF proof for oVCs or a single Zero-Knowledge Proof for ACs. The construction prevents revocation status tracking, allows holders to stay offline, and hides issuer revocation behavior. We analyze the privacy properties of UPPR and provide a prototype implementation on Ethereum. Our implementation enables off-chain verification at no cost. On-chain checks cost 0.56-0.84 USD, while issuers pay only 0.00002-0.00005 USD per credential to refresh the revocation state.
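To illustrate what publishing revocations in a Bloom filter cascade means structurally, here is a minimal, non-private sketch in the CRLite style: alternating Bloom filters are built until no false positives remain, and membership is decided by the parity of the first level a token fails. All parameters and names are illustrative; UPPR's VRF tokens, blockchain publication, and zero-knowledge non-revocation proofs are not modeled.

```python
import hashlib

class Bloom:
    def __init__(self, items, bits=256, k=3):
        self.bits, self.k, self.arr = bits, k, bytearray(bits)
        for it in items:
            for pos in self._positions(it):
                self.arr[pos] = 1
    def _positions(self, item):
        return [int.from_bytes(hashlib.sha256(f"{i}|{item}".encode()).digest(), "big") % self.bits
                for i in range(self.k)]
    def __contains__(self, item):
        return all(self.arr[pos] for pos in self._positions(item))

def build_cascade(revoked, valid):
    """Alternate filters: level 1 holds the revoked set, level 2 its false positives among
    the valid set, level 3 the false positives of level 2 among the revoked set, and so on."""
    levels, include, exclude = [], set(revoked), set(valid)
    while include:
        f = Bloom(include)
        levels.append(f)
        include, exclude = {x for x in exclude if x in f}, include
    return levels

def is_revoked(cascade, token):
    for level, f in enumerate(cascade, start=1):
        if token not in f:
            return level % 2 == 0        # failing an even level means "revoked"
    return len(cascade) % 2 == 1         # surviving every level: parity of the depth decides

revoked = {f"vrf-token-{i}" for i in range(20)}
valid = {f"vrf-token-{i}" for i in range(20, 200)}
cascade = build_cascade(revoked, valid)
assert all(is_revoked(cascade, t) for t in revoked)
assert not any(is_revoked(cascade, t) for t in valid)
```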
## 2025/1920
* Title: ALFOMs and the Moirai: Quantifying the Performance/Security Tradeoff for ZK-friendly Hash Functions
* Authors: Aurélien Boeuf, Léo Perrin
* [Permalink](
https://eprint.iacr.org/2025/1920)
* [Download](
https://eprint.iacr.org/2025/1920.pdf)
### Abstract
Zero-Knowledge (ZK) protocols rely internally on hash functions for their security arguments. However, the hash functions that are the most efficient in this context differ substantially from e.g. SHA-3: their round function $R$ must enable an efficient arithmetization of its verification. In practice, this means that verifying whether $y = R(x)$ involves as few finite field multiplications as possible. In turn, this design requirement implies a greater vulnerability to algebraic attacks. In fact, improvements to those attacks have proved devastating, and imply the need to completely rethink the methods used to ensure security against them. In this paper, we show that it is possible to build a simple yet efficient security argument based on a precise estimate of the so-called "algebraic degree" of a system of equations. Furthermore, we show that the increase of this quantity across rounds is tightly connected to the cost of the hash function in two different arithmetizations, namely AIR and R1CS. We precisely quantify this relation by introducing ALgebraic Figures Of Merit (ALFOMs) that capture how efficient a specific primitive (and in fact its round function) is at increasing the security per unit of cost. This new insight allows us to better understand sometimes puzzling performance differences between state-of-the-art hash functions in the R1CS and AIR cases, and to provide a fair and simple comparison of their round functions in this context. Furthermore, we present a new group of round functions, which we call the Moirai, that allow us to explore what a round function providing an optimal performance/security tradeoff could look like.
## 2025/1921
* Title: Multi-Copy Security in Unclonable Cryptography
* Authors: Alper Çakan, Vipul Goyal, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
* [Permalink](
https://eprint.iacr.org/2025/1921)
* [Download](
https://eprint.iacr.org/2025/1921.pdf)
### Abstract
Unclonable cryptography leverages the quantum no-cloning principle to copy-protect cryptographic functionalities. While most existing works address the basic single-copy security, the stronger notion of multi-copy security remains largely unexplored.
We introduce a generic compiler that upgrades collusion-resistant unclonable primitives to achieve multi-copy security, assuming only one-way functions. Using this framework, we obtain the first multi-copy secure constructions of public-key quantum money (termed quantum coins), single-decryptor encryption, unclonable encryption, and more. We also introduce an extended notion of quantum coins, called upgradable quantum coins, which allow weak (almost-public) verification under weaker assumptions and can be upgraded to full public verification under stronger assumptions by the bank simply publishing additional classical information.
Along the way, we give a generic compiler that upgrades single-copy secure single-decryptor encryption to a collusion-resistant one, assuming the existence of functional encryption, and construct the first multi-challenge secure unclonable encryption scheme, which we believe are of independent interest.
## 2025/1922
* Title: Guaranteeing a Dishonest Party's Knowledge (Or: Setup Requirements for Deniable Authentication)
* Authors: Nils Fleischhacker, Guilherme Rito
* [Permalink](
https://eprint.iacr.org/2025/1922)
* [Download](
https://eprint.iacr.org/2025/1922.pdf)
### Abstract
Deniable authentication is a very desirable guarantee for secure messaging---and in particular Off-The-Record messaging: it allows a sender Alice to authenticate her messages while retaining plausible deniability on whether she authenticated any of her messages.
While deniable authentication is not achievable in standard Public-Key Infrastructures, Dodis et al. (TCC '09) have shown that it is achievable in so-called Key-Registration with Knowledge setups. These guarantee:
1. dishonest parties' key-pairs are honestly generated; and
2. dishonest parties have access to their secret keys.
However, these are expensive assumptions, and it is unclear how to securely realize them in practice.
In this paper we study the weakest setup requirements that are sufficient to enable deniable authentication. One of our main findings is that deniable authentication does require an expensive---but not necessarily impractical---Key-Registration procedure. Roughly, this procedure must guarantee that if a user's key registration is successful, then the user can extract by themselves valid secret keys from their interaction with the registration authority. We show this setup is sufficient by capturing it via an appropriate security model and then proving the anonymity of a ring signature based on it. On the other hand, we explain why this setup seems inherently necessary by listing a series of attacks that void deniability as soon as a user manages to successfully register a public key for which it can convincingly claim it does not know a valid (corresponding) secret key.
The requirements we identify for key-registration protocols are expensive, but not necessarily impractical. Our second main contribution is showing how to amortize the per-user Key-Registration cost, which brings deniable authentication guarantees a step closer to practicality.
## 2025/1923
* Title: Coil-Based Detection and Concurrent Error Correction Against EMFI - An Experimental Case-Study on a Prototype ASIC
* Authors: Felix Uhle, Nicolai Müller, Thorben Moos, Philipp Günther, Amir Moradi
* [Permalink](
https://eprint.iacr.org/2025/1923)
* [Download](
https://eprint.iacr.org/2025/1923.pdf)
### Abstract
Fault injection attacks target cryptographic primitives implemented in hardware by deliberately inducing faults, leading to abnormal behavior that can be exploited to extract sensitive information. To mitigate these threats, practical and low-overhead countermeasures, whether at the algorithmic or physical level, are essential. However, the real-world effectiveness of these countermeasures remains uncertain, as it is not always clear whether, or to what extent, their underlying security assumptions hold in practice. Therefore, a thorough evaluation under realistic attack scenarios, including recent techniques such as Electromagnetic Fault Injection (EMFI), is crucial. In this work, we demonstrate the resistance of a protected real-world target chip against EMFI. Specifically, our fabricated 65 nm CMOS ASIC employs concurrent error correction based on the techniques described in Impeccable Circuits II as an algorithm-level countermeasure. At the physical level, the chip integrates multiple coils of varying sizes and positions that serve as sensors for electromagnetic fields. We employ a practical and affordable attack setup featuring a commercial fault-injection probe mounted on an XYZ stage for precise positioning over the ASIC. This setup allows us to investigate the effects of various attack parameters, including the probe's position, pulse polarity, and voltage level. Our results highlight that the coils serve as a lightweight and effective countermeasure for the practical detection of EMFI attempts. In contrast, for concurrent error correction, a gap between theory and practice emerges: the protection overhead actually makes such designs more susceptible to EMFI in real-world scenarios.
## 2025/1924
* Title: Golden: Lightweight Non-Interactive Distributed Key Generation
* Authors: Benedikt Bünz, Kevin Choi, Chelsea Komlo
* [Permalink](
https://eprint.iacr.org/2025/1924)
* [Download](
https://eprint.iacr.org/2025/1924.pdf)
### Abstract
In this work, we present Golden, a non-interactive Distributed Key Generation (DKG) protocol. The core innovation of Golden is how it achieves public verifiability in a lightweight manner, allowing all participants to non-interactively verify that all other participants followed the protocol correctly. For this reason, Golden can be performed with only one round of (broadcast) communication. Non-interactive DKGs are important for distributed applications; as parties may go offline at any moment, reducing rounds of communication is a desirable feature. Golden outputs Shamir secret shares of a field element $sk \in \mathbb{Z}_p$ to all participants, and a public key $PK = g^{sk}$ that is a discrete-logarithm commitment to $sk$. Further, the security of Golden requires only the hardness of discrete-logarithm assumptions, and so it can be used over any elliptic curve where these assumptions hold.
Golden is more efficient than prior related schemes in both bandwidth and computation. For 50 participants, Golden requires only 223 kb bandwidth and 13.5 seconds of total runtime for each participant, in comparison to ElGamal-based non-interactive DKG, which requires 27.8 MB bandwidth and 40.5 seconds runtime per participant.
As a building block, we define a new exponent Verifiable Random Function (eVRF). Our eVRF uses a non-interactive key exchange (NIKE) as a building block to derive a Diffie-Hellman shared secret key, and proves correctness of this key with respect to the corresponding Diffie-Hellman public keys. Within Golden, participants use this eVRF in a pairwise manner to generate a one-time pad to encrypt Shamir secret shares to their respective recipients while ensuring public verifiability. As such, Golden avoids the use of public-key encryption schemes such as ElGamal, Paillier, or class groups, departing from prior schemes in the literature. Finally, our eVRF may be of independent interest to settings where publicly-verifiable encryption is desirable.
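A toy sketch of the pairwise data flow described above, using a tiny Schnorr group instead of an elliptic curve: each pair of parties derives a Diffie-Hellman shared element, hashes it into a one-time pad, and the dealer uses that pad to encrypt a Shamir share. All group parameters and names are illustrative, and the eVRF correctness proofs and public verifiability that make Golden an actual DKG are omitted.

```python
import hashlib
import secrets

# Toy Schnorr group: the subgroup of order q = 11 in Z_23^*, generated by g = 4.
p, q, g = 23, 11, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def pad(shared_element: int) -> int:
    """Hash a DH shared group element into a one-time pad in Z_q."""
    return int.from_bytes(hashlib.sha256(str(shared_element).encode()).digest(), "big") % q

def shamir_shares(sk: int, t: int, n: int) -> dict[int, int]:
    coeffs = [sk] + [secrets.randbelow(q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q for i in range(1, n + 1)}

# One participant (acting as dealer for its contribution) pads each recipient's share
# with a pairwise DH-derived key instead of using public-key encryption.
dealer_sk, dealer_pk = keygen()
recipients = {i: keygen() for i in (1, 2, 3)}
shares = shamir_shares(sk=7, t=2, n=3)
ciphertexts = {i: (shares[i] + pad(pow(pk_i, dealer_sk, p))) % q
               for i, (_, pk_i) in recipients.items()}

# Each recipient recovers its share using only its own DH secret and the dealer's public key.
for i, (x_i, _) in recipients.items():
    assert (ciphertexts[i] - pad(pow(dealer_pk, x_i, p))) % q == shares[i]
```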
## 2025/1925
* Title: Improved Modeling for Substitution Boxes with Negative Samples and Beyond (Extended Version)
* Authors: Debranjan Pal, Anubhab Baksi, Surajit Mandal, Santanu Sarkar
* [Permalink](
https://eprint.iacr.org/2025/1925)
* [Download](
https://eprint.iacr.org/2025/1925.pdf)
### Abstract
A common way to perform classical cryptanalysis on symmetric-key ciphers is to encode the problem as an instance of the Mixed Integer Linear Programming (MILP) problem and then run the instance with an efficient solver. For this purpose, it is essential to model the components in a way that is compatible with the MILP formulation while preserving the characteristics of the cipher. In this work, we address the problem of efficiently encoding a substitution box (SBox for short). More specifically, we consider the evaluation tables, namely, the Difference Distribution Table (DDT) used to model differential cryptanalysis, the Linear Approximation Table (LAT) used to model linear cryptanalysis, the Division Property Table (DPT) used to model integral cryptanalysis, and the Boomerang Connectivity Table (BCT) used to model boomerang cryptanalysis.
In the current stage, the only model proposed/used in the literature is to look for the non-zero entries in the evaluation table and encode for that (this ultimately leads to the minimum number of active SBoxes for the target cipher). In our first contribution, we experiment with the complementary concept of using the zero entries (i.e., treating the zero entries as nonzero entries and vice versa). In our second contribution, we observe that the top row and column of each table (which are always taken into consideration) are actually redundant, as those are inherently taken care of through additional constraints at another level (which we call the `secondary structure').
We show the efficacy of our proposals by furnishing results on the evaluation tables on multiple SBoxes. In fact, the first proposal reduces the number of constraints for certain SBox evaluation tables (depending on the properties of the table). The second proposal, on the other hand, always reduces the number of constraints from the state-of-the-art results.
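As a concrete example of the evaluation tables being modeled, the snippet below computes the DDT of a 4-bit SBox (PRESENT's SBox, chosen arbitrarily) and counts its zero and non-zero entries, i.e., the two complementary encoding targets compared in the first proposal.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT's 4-bit SBox

def ddt(sbox):
    """DDT[din][dout] = #{x : S(x) ^ S(x ^ din) = dout}."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

T = ddt(SBOX)
nonzero = sum(1 for row in T for e in row if e != 0)
print(f"non-zero DDT entries: {nonzero}, zero entries: {16 * 16 - nonzero}")
# The usual MILP model encodes the non-zero (possible) transitions; the first proposal
# above experiments with encoding the zero (impossible) entries instead, and the second
# drops the top row and column, which the secondary structure already accounts for.
```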
## 2025/1926
* Title: Hashing-friendly elliptic curves
* Authors: Dimitri Koshelev
* [Permalink](
https://eprint.iacr.org/2025/1926)
* [Download](
https://eprint.iacr.org/2025/1926.pdf)
### Abstract
This article considers batch hashing to elliptic curves. Hash-to-curve functions have found numerous applications in elliptic curve cryptography. In practice, a hash-to-curve function is often evaluated by the same entity on many different inputs at once. It turns out that, under certain mild conditions, such simultaneous evaluation can be carried out several times faster than separate evaluations. In this regard, the article introduces a new class of elliptic curves over finite fields that are more appropriate for multiple hashing to them. Moreover, two explicit hashing-friendly Montgomery/twisted Edwards curves (of $\approx 128$ security bits) have been generated: one of CM discriminant $-7$, i.e., a GLV-friendly curve, and one of huge CM discriminant, i.e., a CM-secure curve. The new elliptic curves are intentionally covered by so-called Klein's and Bring's curves of geometric genera $3$ and $4$, respectively. The latter are well studied in various algebraic geometry contexts, although, to the author's knowledge, they have not yet been (reasonably) applied in cryptography. Such a mathematical complication is justified, since conventional curves (from existing standards or of $j$-invariants $0$, $1728$) appear to be less efficient for batch hashing.
## 2025/1927
* Title: Accelerating LWE-Based Post-Quantum Cryptography with Approximate Computing
* Authors: Diamante Simone CRESCENZO, Emanuele VALEA, Alberto BOSIO
* [Permalink](
https://eprint.iacr.org/2025/1927)
* [Download](
https://eprint.iacr.org/2025/1927.pdf)
### Abstract
Conventional cryptographic algorithms rely on hard mathematical problems to ensure an appropriate level of security. However, with the advent of quantum computing, classical cryptographic algorithms are expected to become vulnerable. For this reason, Post-Quantum Cryptography (PQC) algorithms have emerged as a response, being designed to resist quantum attacks. Most PQC algorithms rely on the Learning With Errors (LWE) problem, where generating pseudo-random controlled errors is crucial. A well-known solution is the use of hash functions followed by error samplers, implemented according to specific error distributions, whose implementation is challenging. This paper provides a proof of concept demonstrating how Approximate Computing (AxC) can be exploited in LWE-based cryptographic algorithms to alleviate this implementation bottleneck. The main idea is to use AxC circuits to run some of the algorithm's operations, introducing the required error for free thanks to the approximation. Our key contribution is demonstrating how AxC techniques can be effectively applied to LWE-based algorithms, highlighting a novel approach to generating and introducing the error. This concept has proven effective in an approximate implementation of the FrodoKEM algorithm, where we achieve a 50.3% reduction in the need for Gaussian sampling. Additionally, we observe a performance improvement of 2.19%, which further supports the feasibility of this approach. Overall, this work introduces and validates a new design direction for LWE-based cryptography through AxC, opening the way for further research.
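The following toy Python sketch illustrates the general AxC idea only (it is not the paper's FrodoKEM integration): an approximate multiplier that truncates one operand behaves like an exact multiply plus a small error, so part of the noise required by an LWE-style product $A \cdot s$ can come from the approximation itself. The truncation model, sizes, and names are assumptions made for the example.

```python
# Toy model of an approximate-computing multiplier: drop the low bits of one
# operand before multiplying. The difference to the exact product is a small,
# bounded error term, the kind of "free" noise the AxC approach aims to reuse
# in LWE-style computations. Parameters here are illustrative only.
import random

def approx_mul(a, b, dropped_bits=4):
    return ((a >> dropped_bits) << dropped_bits) * b   # truncate a, multiply exactly

q, n = 2**15, 8
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
s = [random.randrange(4) for _ in range(n)]            # small secret entries

exact  = [sum(A[i][j] * s[j] for j in range(n)) for i in range(n)]
approx = [sum(approx_mul(A[i][j], s[j]) for j in range(n)) for i in range(n)]
errors = [approx[i] - exact[i] for i in range(n)]      # small offsets from truncation
print("error introduced per coordinate:", errors)
print("noisy LWE-style coordinates:", [v % q for v in approx])
```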
## 2025/1928
* Title: Optimizing the Post Quantum Signature Scheme CROSS for Resource Constrained Devices
* Authors: Jonas Schupp, Marco Gianvecchio, Alessandro Barenghi, Patrick Karl, Gerardo Pelosi, Georg Sigl
* [Permalink](
https://eprint.iacr.org/2025/1928)
* [Download](
https://eprint.iacr.org/2025/1928.pdf)
### Abstract
Post-quantum cryptosystems are currently attracting significant research efforts due to the continuous improvements in quantum computing technologies, which led the National Institute of Standards and Technology (NIST) to open standardization competitions to foster proposals and public scrutiny of cryptosystems and digital signatures. Whilst NIST has chosen, after four selection rounds, three digital signature algorithms, it has also opened a new selection process, as the chosen candidates either rely only on lattice-based computationally hard problems or have unsatisfactory performance figures. In this work, we propose two optimized implementations of the Codes and Restricted Objects Signature Scheme (CROSS) targeting the Cortex-M4 platform. One implementation targets the minimal possible stack size, while the other trades some memory space for performance, using vectorization for some performance-critical arithmetic operations. We show that all parameter sets fit within at most 24 kB of stack, which corresponds to a reduction by a factor of 15 to 45 with respect to the reference implementation. The memory footprint of our implementation, also taking the size of the binary and the signature into account, is less than 128 kB. We additionally outline different stack reduction options which allow for a fine-grained trade-off between the memory footprint and the performance of the algorithm. Notably, we also show that our memory optimizations alone have no significant impact on the signature verification of CROSS, while we even achieve a speed-up factor of up to 1.7 when taking the stack and speed optimizations into account.
## 2025/1929
* Title: Cryptanalysis of a Post-Quantum Signature Scheme Based on Number-Theoretic Assumptions
* Authors: Agha Aghayev, Nour-eddine Rahmani
* [Permalink](
https://eprint.iacr.org/2025/1929)
* [Download](
https://eprint.iacr.org/2025/1929.pdf)
### Abstract
Asymmetric cryptographic constructions based on number-theoretic hardness assumptions are rendered insecure by Shor's quantum algorithm and will be vulnerable to large-scale quantum computers. Hence, the adoption of quantum-resistant cryptosystems is a major task. Digital signatures are a fundamental primitive in numerous applications. Recently, a new approach by Nguyen et al. [9] has claimed post-quantum security by basing the signature algorithm's security on a variant of the discrete logarithm problem. In this paper, we present a cryptanalysis of this construction and demonstrate a practical forgery attack that allows generating an unlimited number of valid signatures without access to a signing oracle.
## 2025/1930
* Title: Attention is still what you need: Another Round of Exploring Shoup's GGM
* Authors: Taiyu Wang, Cong Zhang, Hong-Sheng Zhou, Xin Wang, Pengfei Chen, Wenli Wang, Kui Ren, Chun Chen
* [Permalink](
https://eprint.iacr.org/2025/1930)
* [Download](
https://eprint.iacr.org/2025/1930.pdf)
### Abstract
The generic group model (GGM) is fundamental for evaluating the feasibility and limitations of group-based cryptosystems. Two prominent versions of the GGM exist in the literature: Shoup's GGM and Maurer's GGM. Zhandry (CRYPTO 2022) points out inherent limitations in Maurer's GGM by demonstrating that several textbook cryptographic primitives, which are provably secure in Shoup's GGM, cannot be proven secure in Maurer's model.
In this work, we further investigate Shoup's GGM and identify novel limitations that have been previously overlooked. Specifically, to prevent generic algorithms from generating valid group elements without querying the oracle, the model typically employs sufficiently large encoding lengths. This leads to sparse encodings, a setting referred to as the sparse generic group model (sparse GGM). We emphasize that this sparseness introduces several constraints:
--Groups with AE and Black-Box Separation: Shoup's GGM is typically instantiated with elliptic curve groups, which admit admissible encodings (AE), i.e., functions mapping from Z_N to elliptic curve points. We establish a black-box separation, showing that the sparse GGM fails to capture cryptographic groups that are both (1) computational Diffie-Hellman (CDH) secure and (2) compatible with admissible encodings.
--Comparison with EC-GGM: We examine the relationship between the sparse GGM and the Elliptic Curve Generic Group Model (EC-GGM) introduced by Groth and Shoup (EUROCRYPT 2022), which inherently yields CDH-secure groups with admissible encodings. Within the framework of indifferentiability, we prove that EC-GGM is strictly stronger than sparse GGM.
--Dense Groups and Black-Box Separation: We revisit groups with dense encodings and establish a black-box separation between CDH-secure dense groups and the sparse GGM.
--Extension to Bilinear Settings: Our results naturally extend to the sparse Generic Bilinear Group Model (GBM), demonstrating that the aforementioned constraints still hold.
In conclusion, our findings indicate that both feasibility and impossibility results in Shoup's GGM should be reinterpreted in a fine-grained manner, encouraging further exploration of cryptographic constructions and black-box separations in EC-GGM or dense GGM.
## 2025/1931
* Title: Differential-Linear Cryptanalysis of GIFT family and GIFT-based Ciphers
* Authors: Shichang Wang, Meicheng Liu, Shiqi Hou, Dongdai Lin
* [Permalink](
https://eprint.iacr.org/2025/1931)
* [Download](
https://eprint.iacr.org/2025/1931.pdf)
### Abstract
At CHES 2017, Banik et al. proposed the lightweight block cipher GIFT, which comes in two versions, GIFT-64 and GIFT-128. Recently, many authenticated encryption schemes have adopted GIFT-128 as their underlying primitive, such as GIFT-COFB and HyENA. To gain a comprehensive picture of the soundness of these designs, we evaluate their security against differential-linear cryptanalysis.
To this end, automatic tools have been developed to search for differential-linear approximations of S-box-based ciphers. With the assistance of these tools, we find 13-round differential-linear approximations for GIFT-COFB and HyENA. Based on these distinguishers, 18-round key-recovery attacks are given for the message processing phase and the initialization phase of both ciphers. Moreover, the resistance of GIFT-64/128 against differential-linear cryptanalysis is also evaluated. We find 12-round and 17-round differential-linear approximations for GIFT-64 and GIFT-128, respectively, which lead to 18-round and 19-round key-recovery attacks. We stress that our attacks do not threaten the security of these ciphers.
## 2025/1932
* Title: Decoding Balanced Linear Codes With Preprocessing
* Authors: Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, Prashant Nalini Vasudevan
* [Permalink](
https://eprint.iacr.org/2025/1932)
* [Download](
https://eprint.iacr.org/2025/1932.pdf)
### Abstract
Prange's information set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depends on $C$ only, and (2) $C$ is $n^{-\Omega(1)}$-balanced.
As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate.
Our main technical result is that the Hamming weight of $Hw$, where $H$ is a random sample of *short dual* codewords, measures the proximity of a word $w$ to the code in the regime of interest. Given such $H$ as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding $Hw$ for an arbitrary polynomial-size advice matrix $H$.
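The local-minimization step can be pictured with a toy greedy bit-flipping loop that uses the weight of $Hw$ as the proximity score. The sketch below is only illustrative: it uses a systematic random code whose parity checks happen to be short, and it skips the balancedness and advice-sampling requirements that the paper's analysis actually needs.

```python
# Toy decoder: greedily flip bits to locally minimize wt(H w mod 2).
# G = [I | P] with sparse columns of P, so the rows of H = [P^T | I] are short
# parity checks (a crude stand-in for the paper's "short dual codewords").
import numpy as np
rng = np.random.default_rng(1)

k, r = 20, 40
P = np.zeros((k, r), dtype=np.uint8)
for j in range(r):
    P[rng.choice(k, size=3, replace=False), j] = 1      # 3 ones per column
G = np.concatenate([np.eye(k, dtype=np.uint8), P], axis=1)
H = np.concatenate([P.T, np.eye(r, dtype=np.uint8)], axis=1)

def score(w):                    # wt(Hw): proximity of w to the code
    return int(((H @ w) % 2).sum())

msg = rng.integers(0, 2, size=k, dtype=np.uint8)
codeword = (msg @ G) % 2
noisy = codeword.copy()
noisy[rng.choice(k + r, size=3, replace=False)] ^= 1    # inject 3 bit errors

w = noisy.copy()
improved = True
while improved:                  # local minimization of the proximity measure
    improved = False
    for i in range(k + r):
        before = score(w)
        w[i] ^= 1
        if score(w) < before:
            improved = True      # keep the flip
        else:
            w[i] ^= 1            # undo
print("syndrome weight:", score(noisy), "->", score(w),
      "| matches sent codeword:", bool((w == codeword).all()))
```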
## 2025/1933
* Title: Revisiting Multi-Key Blind Rotation: Optimized NTRU-based Bootstrapping for MKFHE
* Authors: Xiaohan Wan, Mingqiang Wang, Xiaopeng Cheng, Haiyang Xue, Qi Zhang
* [Permalink](
https://eprint.iacr.org/2025/1933)
* [Download](
https://eprint.iacr.org/2025/1933.pdf)
### Abstract
Multi-key fully homomorphic encryption (MKFHE) extends the capability of fully homomorphic encryption by enabling homomorphic computations on ciphertexts encrypted under different keys. Multi-key blind rotation is the core and most computationally intensive component of MKFHE. The NTRU-based multi-key blind rotation proposed by Xiang et al. (ASIACRYPT 2024) has the potential to achieve smaller key sizes, faster blind rotation, and lower noise compared to its RLWE-based counterpart. However, while the multi-key blind rotation in the RLWE-based scheme proposed by Kwak et al. (PKC 2024) remains compatible with their single-key counterpart, the NTRU-based versions do not.
This motivates our work to advance NTRU-based schemes in both efficiency and compatibility. We propose a novel workflow for NTRU-based multi-key blind rotation that achieves compatibility with its single-key counterpart. Our approach significantly reduces both computational and storage complexity compared to the state-of-the-art NTRU-based design, while maintaining a comparable noise magnitude. Building upon this workflow, we further construct two MKFHE schemes for bootstrapping multi-key LWE ciphertexts and multi-key matrix NTRU ciphertexts, both supporting a super-constant number of parties with respect to the ring dimension $N$. Experimental results demonstrate that our method outperforms existing NTRU-based bootstrapping for TFHE-like MKFHE in both computational efficiency and bootstrapping key size. Specifically, our 2-key gate bootstrapping takes only 26 ms and requires a bootstrapping key of size 7.34 MB, achieving a 3.1× speedup and a 1.9× key size reduction compared to prior NTRU-based works.
## 2025/1934
* Title: HORCRUX - A Lightweight PQC-RISC-V eXtension Architecture
* Authors: Alessandra Dolmeta, Valeria Piscopo, Guido Masera, Maurizio Martina, Michael Hutter
* [Permalink](
https://eprint.iacr.org/2025/1934)
* [Download](
https://eprint.iacr.org/2025/1934.pdf)
### Abstract
This work presents a RISC-V extension for Post-Quantum Cryptography (PQC) called HORCRUX, which provides a unified Instruction-Set Extension (ISE) supporting all NIST-approved PQC algorithms. HORCRUX addresses the current fragmentation in hardware support, where existing extensions typically focus on individual algorithms or limited subsets of PQC schemes, and targets the common kernels shared across ML-KEM, ML-DSA, SLH-DSA, and HQC. To address the primary computational bottlenecks of all these algorithms (namely, modular arithmetic, matrix multiplication, and hash transformations), the extension introduces new RISC-V instructions executed by a tightly coupled coprocessor. This coprocessor requires only minimal hardware resources, making the architecture efficient for constrained devices while still providing substantial acceleration. An experimental evaluation on a Zynq UltraScale+ FPGA demonstrates speedups of up to 3× for lattice- and hash-based schemes, and over 30× for code-based schemes, while adding less than 2,900 LUTs and 400 FFs to the design. The extension's modular structure maintains backward compatibility with standard RISC-V cores, offering a scalable solution for deploying PQC on highly constrained embedded systems.
## 2025/1935
* Title: Fully Homomorphic Encryption for Matrix Arithmetic
* Authors: Craig Gentry, Yongwoo Lee
* [Permalink](
https://eprint.iacr.org/2025/1935)
* [Download](
https://eprint.iacr.org/2025/1935.pdf)
### Abstract
We propose an efficient fully homomorphic encryption (FHE) scheme tailored for matrix arithmetic based on the Ring-Learning with Errors (RLWE) problem. The proposed scheme naturally supports matrix multiplication, addition, and Hadamard multiplication for batched matrices of various sizes over both complex numbers and integers. Encrypted matrix multiplication is reduced to four matrix multiplications of ciphertext elements, without the need for expensive operations such as slot-to-coefficient conversion or ring switching. In addition, the scheme efficiently supports matrix transformations, including general and conjugate transpositions, as well as matrix rotations: inter-matrix rotations across batched matrices and intra-matrix rotations within rows and columns. Moreover, the proposed FHE scheme can be directly combined with existing bootstrapping algorithms.
By eliminating the need for expensive operations such as repeated slot rotations and conversion between slot- and coefficient-encoding, the proposed construction achieves significant performance improvements. In our construction, encrypted multiplications of $n\times n$ matrices under slot encoding are decomposed into two parts: (1) matrix multiplication, i.e., four $n\times n$ matrix multiplications of ciphertext coefficients, and (2) key switching, with a total cost approximately 2 to 4 times that of Hadamard multiplication. We implemented the proposed scheme and utilized the FLINT library for the matrix multiplication component. Experimental results demonstrate that, even when leveraging highly optimized implementations, matrix multiplication remains the major cost, indicating that our construction substantially reduces auxiliary overheads and achieves strong overall efficiency.
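To see where the "four matrix multiplications of ciphertext elements" come from, here is a deliberately oversimplified, noise-free Python/numpy toy. It uses a scalar secret $s$ (so everything commutes) and no noise or encoding, which is not the paper's scheme; it only shows that multiplying two ciphertexts $(C_0, C_1)$ with decryption $C_0 + s\,C_1$ costs four component matrix products, and that the resulting degree-2 term is what key switching must handle.

```python
# Noise-free toy, NOT the paper's scheme: scalar "secret" s, matrix ciphertext
# components, decryption C0 + s*C1 (mod q). Multiplying two such ciphertexts
# costs exactly four matrix products of components; the s^2 part (D2) is the
# degree-2 term that key switching would relinearize in a real scheme.
import numpy as np
rng = np.random.default_rng(0)

q, n = 2**16, 4
s = int(rng.integers(1, q))

def enc(M):
    C1 = rng.integers(0, q, size=M.shape)
    C0 = (M - s * C1) % q                 # so that C0 + s*C1 = M (mod q)
    return C0, C1

def dec(C0, C1):
    return (C0 + s * C1) % q

Ma = rng.integers(0, 50, size=(n, n))
Mb = rng.integers(0, 50, size=(n, n))
C0a, C1a = enc(Ma)
C0b, C1b = enc(Mb)
assert np.array_equal(dec(C0a, C1a), Ma)

# The four component matrix multiplications:
D0 = (C0a @ C0b) % q
D1 = (C0a @ C1b + C1a @ C0b) % q
D2 = (C1a @ C1b) % q                      # degree-2 part -> key switching

assert np.array_equal((D0 + s * D1 + s * s * D2) % q, (Ma @ Mb) % q)
```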
## 2025/1936
* Title: Overshooting the Threshold: ($td+n$)-Masking
* Authors: Vincent Grosso, Carlos Andres Lara-Nino
* [Permalink](
https://eprint.iacr.org/2025/1936)
* [Download](
https://eprint.iacr.org/2025/1936.pdf)
### Abstract
Masking is one of the most widespread countermeasures against side-channel attacks. However, its integration into hardware implementations is subject to physical hazards that can undermine its security. To counter glitches, the most studied physical hazard, an effective solution is to resynchronize the signals by integrating additional hardware layers into the architecture. However, these solutions have an impact on the performance of the implementation. A way to avoid these limitations is to use more shares to compute higher-degree functions. We study the cost of this approach, termed ($td+n$)-masking. We first derive optimal dependence structures for the creation of non-complete sharings, which allow us to obtain efficient implementations of substitution boxes. As a case study, we use these elements to create a second-order masked architecture for the PRESENT cipher. We perform multiple TVLA tests to validate our approach. Our results confirm that the strategy is efficient in terms of performance, at the cost of increased hardware resources.
## 2025/1937
* Title: Noisy Function Secret Sharing and its applications to Differentially Private computations
* Authors: Marc Damie, Federico Mazzone, Florian Hahn, Andreas Peter, Jan Ramon
* [Permalink](
https://eprint.iacr.org/2025/1937)
* [Download](
https://eprint.iacr.org/2025/1937.pdf)
### Abstract
Function Secret Sharing (FSS) schemes enable sharing secret functions between multiple parties, with notable applications in anonymous communication and privacy-preserving machine learning. While two-party schemes offer logarithmic key sizes, multi-party schemes remain less practical due to significantly larger keys. Although several approaches have been proposed to improve multi-party schemes, a significant efficiency gap remains between the two-party and multi-party settings.
Our work introduces noisy FSS: a relaxation of FSS that preserves the standard privacy guarantees but relaxes the correctness definition by allowing a small amount of noise in the output. We formally define noisy FSS and show how the noise introduced by the scheme can be leveraged to provide differentially private outputs in statistics applications.
To demonstrate the benefits of this relaxation, we adapt a scheme proposed by Corrigan-Gibbs et al. (S&P'15). While their scheme provides the smallest key sizes among multi-party schemes, it does not support some applications, notably in statistics, due to its non-linear share decoding. In contrast, recent works such as Goel et al. (CRYPTO'25) have larger keys but support all FSS applications. Our noisy adapted scheme offers the best of both worlds, matching the best key sizes while providing the properties needed for statistics applications.
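The Python toy below is a heavily simplified stand-in for the idea (it is not the paper's scheme and uses full-length additive shares rather than compressed FSS keys): a client shares the indicator vector of its query point, each server evaluates its share against a private histogram and adds local noise, and the reconstructed answer is the true count plus noise, which is exactly the slack a differential-privacy analysis can exploit. The `share_point` and `server_eval` helpers and the noise scale are assumptions for the example.

```python
# Toy "noisy" secret-shared counting query (illustrative, not the paper's FSS).
import secrets
import numpy as np
rng = np.random.default_rng(7)

q = 2**31 - 1                       # modulus for additive secret sharing
domain = 16

def share_point(alpha):
    # Additive shares of the indicator vector e_alpha (naive, not a compressed key).
    f = np.zeros(domain, dtype=np.int64)
    f[alpha] = 1
    r = np.array([secrets.randbelow(q) for _ in range(domain)], dtype=np.int64)
    return r, (f - r) % q

def server_eval(share, histogram, noise_scale=2.0):
    # Inner product with the share, plus local noise that survives reconstruction.
    noise = int(round(rng.laplace(scale=noise_scale)))
    return (int(share @ histogram) + noise) % q

hist = np.array([3, 0, 7, 1] + [0] * 12, dtype=np.int64)   # servers' private data
k0, k1 = share_point(alpha=2)
ans = (server_eval(k0, hist) + server_eval(k1, hist)) % q
ans = ans if ans < q // 2 else ans - q                      # center the result
print("true count:", int(hist[2]), " noisy reconstructed count:", ans)
```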
## 2025/1938
* Title: zk-Cookies: Continuous Anonymous Authentication for the Web
* Authors: Alexander Frolov, Hal Triedman, Ian Miers
* [Permalink](
https://eprint.iacr.org/2025/1938)
* [Download](
https://eprint.iacr.org/2025/1938.pdf)
### Abstract
We are now entering an era where the large-scale deployment of anonymous credentials seems inevitable, driven both by legislation requiring age verification and the desire to distinguish humans from bots in the face of the proliferation of AI-generated content. However, the widespread deployment of anonymous credentials faces the same security and fraud concerns as existing credentials, but without the established techniques for securing them. For non-anonymous credentials on the web today, authentication is a continuous process in which servers collect large volumes of behavioral data to protect account holders (e.g., by detecting account compromise) or to combat fraudulent behavior.
In this paper, we propose Continuous Anonymous Authentication (CAA) schemes and give a concrete construction and applications for preventing credential sharing and theft. CAA schemes allow us to move the server-side collection, storage, and processing of these behavioral signals to the client while maintaining privacy and integrity. CAA schemes support, on the client side, a number of common behavioral analysis tests and analytics both for determining fraudulent behavior and updating security policies. We implement a prototype, zk-Cookies, which runs in the browser, and supports common behavioral signals such as IP address and geolocation history, browser fingerprinting, and page view history. Using this, we build a prototype application for age verification based on legacy credentials (like passports). We implement these checks efficiently in zk-SNARKs, and also show how to securely implement differentially private behavioral analytics in a zk-SNARK. The simplest version of our construction can perform the computation for an update in under 200 ms.
## 2025/1939
* Title: Efficient Polynomial Multiplication for HQC on ARM Cortex-M4
* Authors: Jihoon Jang, Myeonghoon Lee, Donggeun Kwon, Seokhie Hong, Suhri Kim, Sangjin Lee
* [Permalink](
https://eprint.iacr.org/2025/1939)
* [Download](
https://eprint.iacr.org/2025/1939.pdf)
### Abstract
HQC, a code-based KEM selected by NIST for post-quantum standardization in March 2025, relies on fast polynomial multiplication over $\mathbb{F}_2[x]$ on embedded targets. On ARM Cortex-M4, where carry-less multiply is unavailable, prior work has focused on two approaches, Frobenius Additive FFT (FAFFT) and a radix-16 method that computes $\mathbb{F}_2[x]$ multiplications via 32-bit integer multiplications.
In this paper, we propose improved variants of FAFFT and the radix-16 method that optimize HQC on Cortex-M4. Regarding FAFFT, applying FAFFT to a polynomial of length $n=2^{k}+r$ with small $r$, as in hqc-128 and -192, requires operating at length $2^{k+2}\approx 4n$. To address this overhead, we use FAFFT with 2-Karatsuba. We divide at $2^{k}$, evaluate two subproducts of size $2^k$ with FAFFT at length $2^{k+1}$, and handle the residual of size $r$ via radix-16 multiplication. We further optimize the FAFFT butterfly by shortening the XOR sequences that result from fixed-coefficient multiplications expressed as matrix-vector multiplications and by scheduling that fully utilizes all 14 general-purpose registers. In the final 5 layers, we implement bit swaps between registers with SWAPMOVE operations.
For the radix-16 method, we build a cost model based on operation counts to explore Karatsuba and Toom-Cook combinations, producing a small set of optimal candidates that we evaluate on the target device. We compare with the gf2x library using the same algorithm set. For hqc-128 and -192 the resulting combinations are identical, while for hqc-256 our combination achieves 21.7% fewer cycles.
On a NUCLEO-L4R5ZI board with a Cortex-M4 microcontroller, our implementations reduce polynomial-multiplication cycles by 11.3% (hqc-128) and 9.2% (hqc-192) with FAFFT, and by 24.5% (hqc-128) and 3.2% (hqc-192) with the radix-16 method. Overall, we achieve cycle reductions of 16.4%, 15.9%, and 14.7% for key generation, encapsulation, and decapsulation in hqc-128, and 5.8%, 5.8%, and 5.5% in hqc-192.
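The splitting idea can be sketched in a few lines of Python, with GF(2)[x] polynomials represented as integers (bit $i$ is the coefficient of $x^i$). A naive carry-less multiply stands in for both the FAFFT and radix-16 kernels here, so the sketch only shows the shape of the Karatsuba split at $2^k$ with a small residual part; function names are ours.

```python
# GF(2)[x] multiplication with a Karatsuba split at 2^k plus a small residual.
# clmul() is a naive stand-in for the optimized FAFFT / radix-16 kernels.

def clmul(a, b):
    # Schoolbook carry-less multiplication of integer-encoded polynomials.
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    return res

def mul_split(a, b, k):
    # Inputs of length 2^k + r (small r): split at 2^k, Karatsuba-combine.
    half = 1 << k
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo  = clmul(a_lo, b_lo)                   # size-2^k product (FAFFT in the paper)
    mid = clmul(a_lo ^ a_hi, b_lo ^ b_hi)     # second size-2^k product (FAFFT)
    hi  = clmul(a_hi, b_hi)                   # short residual (radix-16 in the paper)
    return lo ^ ((lo ^ mid ^ hi) << half) ^ (hi << (2 * half))

import random
n = (1 << 10) + 37                            # toy length 2^10 + r
a, b = random.getrandbits(n), random.getrandbits(n)
assert mul_split(a, b, 10) == clmul(a, b)
```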
## 2025/1940
* Title: GPV Preimage Sampling with Weak Smoothness and Its Applications to Lattice Signatures
* Authors: Shiduo Zhang, Huiwen Jia, Delong Ran, Yang Yu, Yu Yu, Xiaoyun Wang
* [Permalink](
https://eprint.iacr.org/2025/1940)
* [Download](
https://eprint.iacr.org/2025/1940.pdf)
### Abstract
The lattice trapdoor associated with Ajtai's function is the cornerstone of many lattice-based cryptosystems.
The current provably secure trapdoor framework, known as the GPV framework, uses a \emph{strong smoothness} condition, i.e. $\epsilon\ll \frac{1}{n^2}$ for smoothing parameter $\eta_{\epsilon}(\mathbb{Z}^{n})$, to ensure the correctness of the security reduction.
In this work, we investigate the feasibility of \emph{weak smoothness}, e.g. $\epsilon = O(\frac{1}{n})$ or even $O(1)$ in the GPV framework and present several positive results.
First, we provide a theoretical security proof for GPV with weak smoothness under a new assumption.
Then, we present Gaussian samplers that are compatible with the weak smoothness condition.
As direct applications, we present two practical GPV signature instantiations based on a weak smoothness condition.
Our first instantiation is a variant of Falcon achieving smaller size and higher security.
The public key sizes are $21\%$ to $28\%$ smaller, and the signature sizes are $23.5\%$ to $29\%$ smaller than Falcon.
We also showcase an NTRU-based GPV signature scheme that employs the Peikert sampler with weak smoothness.
This offers a simple implementation, although the security level is considerably lower. Nevertheless, at the NIST-3 security level, our scheme achieves a $49\%$ reduction in size compared to Dilithium-3.
## 2025/1941
* Title: Adaptively-Secure Three-Round Threshold Schnorr from DL
* Authors: Guilhem Niot, Michael Reichle, Kaoru Takemure
* [Permalink](
https://eprint.iacr.org/2025/1941)
* [Download](
https://eprint.iacr.org/2025/1941.pdf)
### Abstract
Threshold signatures are an important tool for trust distribution, and preserving the interface of standardized signatures, such as Schnorr signatures, is crucial for their adoption. In practice, latency dominates end-to-end signing time, so minimizing the number of interaction rounds is critical. Ideally, this is achieved under minimal assumptions and with adaptive security, where the adversary can corrupt signers on-the-fly during the protocol.
While Schnorr signatures are proven secure under the Discrete Logarithm (DL) assumption in the random oracle model, the most round-efficient adaptively-secure threshold Schnorr protocols rely on stronger assumptions such as Decisional Diffie-Hellman (DDH), the Algebraic One-More Discrete Logarithm (AOMDL) or even the Low-Dimensional Vector Representation (LDVR) assumptions. The only adaptively-secure threshold Schnorr signature from the DL assumption requires five rounds, leaving a significant gap in our understanding of this fundamental primitive.
Achieving low-round protocols with adaptive security from the DL assumption has remained an open question.
We resolve this question by presenting the first adaptively-secure threshold Schnorr scheme in three rounds (two online, one offline) in the random oracle model under the DL assumption. Our result demonstrates that achieving both low round complexity and adaptive security is possible while preserving the (so far) minimal assumptions for Schnorr signatures.
To achieve this, we introduce new techniques, including a novel use of an equivocal commitment scheme paired with a simulation-extractable NIZK, and a masking-based aggregated opening strategy for homomorphic commitments. Our work also makes several contributions that might be of independent interest, including a formalization of a strong adaptive security notion, a stronger commitment equivocation property, and an analysis of the simulation-extractability of the randomized Fischlin transformation.
## 2025/1942
* Title: Privacy-Preserving Shape Matching with Leveled Homomorphic Encryption
* Authors: Agha Aghayev, Yadigar Imamverdiyev
* [Permalink](
https://eprint.iacr.org/2025/1942)
* [Download](
https://eprint.iacr.org/2025/1942.pdf)
### Abstract
Homomorphic Encryption (HE) allows parties to securely outsource data while enabling computation on encrypted data, protecting against malicious parties and data leakages. More recent HE schemes enable approximate arithmetic on complex vectors and approximation of non-linear functions, specifically useful for image processing algorithms. The Fourier Shape Descriptor (FSD) is a classical method for shape matching via frequency-domain representation, and we show that FSD can be computed entirely in the encrypted domain. To the best of our knowledge, this is the first work to implement secure shape descriptors and matching via HE. We also present two experiments for similar and different shapes, and measure the performance of the encrypted algorithm.
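For readers unfamiliar with the descriptor, the short numpy sketch below computes a plaintext Fourier Shape Descriptor (contour points as a complex signal, DFT, then magnitude normalization for translation/rotation/scale invariance) and a simple Euclidean matching score. It is only a plaintext reference for the computation the paper carries out under leveled HE; the function names, coefficient count, and matching metric are assumptions made here.

```python
# Plaintext Fourier Shape Descriptor (FSD) and a simple matching score.
import numpy as np

def fourier_shape_descriptor(boundary_xy, num_coeffs=16):
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]   # contour as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                                         # drop DC term: translation invariance
    mags = np.abs(F)                                 # magnitudes: rotation invariance
    mags = mags / mags[1]                            # scale invariance (first harmonic = 1)
    return mags[1:num_coeffs + 1]

def match(desc_a, desc_b):
    return float(np.linalg.norm(desc_a - desc_b))    # smaller = more similar

# Toy check: a circle and a rotated, scaled copy should match almost perfectly.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle  = np.stack([np.cos(t), np.sin(t)], axis=1)
similar = 1.7 * np.stack([np.cos(t + 0.5), np.sin(t + 0.5)], axis=1)
print(match(fourier_shape_descriptor(circle), fourier_shape_descriptor(similar)))
```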
## 2025/1943
* Title: Circuit-Succinct Algebraic Batch Arguments from Projective Functional Commitments
* Authors: David Balbás, Dario Fiore, Russell W. F. Lai
* [Permalink](
https://eprint.iacr.org/2025/1943)
* [Download](
https://eprint.iacr.org/2025/1943.pdf)
### Abstract
Batch arguments for NP (BARGs) are non-interactive proof systems that allow a prover to convince a verifier that $k$ NP statements $x_1, \ldots, x_k$ are valid relative to some circuit $C$, i.e. there exist witnesses $w_i$ such that $(x_i, w_i)$ satisfy $C$ for all $i$, while the proof size remains sublinear in $k$. Most existing BARG constructions achieve a proof size of $|\pi| = poly(\lambda, |C|, \log k)$ for large or not explicitly specified $poly$ acting on $|C|$, with two exceptions:
- Devadas et al. and Paneth and Pass's ''rate-1'' constructions [FOCS'22] achieve $|\pi| = |w| + O(|w|/\lambda) + poly(\lambda, \log k)$ (with matching verification time for Paneth and Pass), but for not explicitly specified $poly$ due to non-black-box use of cryptographic primitives.
- Waters and Wu's algebraic (pairing-based) construction [Crypto'22] and follow-up works achieve $|\pi| = O(\lambda \cdot |C|)$.
In this work, we give the first algebraic (pairing-based) construction of BARG that achieves proof size and online verifier runtime $O(\lambda \cdot |w|)$. We achieve our result by means of a compiler which builds a BARG generically from a projective chainable functional commitment (PCFC), which supports somewhere extraction, subvector projection, and functional openings. We then construct a PCFC from the standard MDDH assumption in bilinear groups by building on top of the functional commitment for circuits by Wee and Wu [Eurocrypt'24]. Our black-box transformation may be of independent interest for understanding the connection between functional commitments and BARGs and towards obtaining other algebraic constructions of the latter.
## 2025/1944
* Title: Architectural Leakage Analysis of Masked Cryptographic Software on RISC-V Cores
* Authors: Siddhartha Chowdhury, Nimish Mishra, Sarani Bhattacharya, Debdeep Mukhopadhyay
* [Permalink](
https://eprint.iacr.org/2025/1944)
* [Download](
https://eprint.iacr.org/2025/1944.pdf)
### Abstract
Software masking, particularly through threshold implementations, has long been regarded as a foundational defense mechanism against side-channel attacks. These schemes enforce the principles of non-completeness and uniformity, offering provable first-order resistance even under realistic leakage assumptions. However, such assurances were primarily developed under the simplified assumption of scalar or in-order execution, where instruction flow and data dependencies are well-behaved and predictable.
Contemporary processors, by contrast, employ deeply optimized out-of-order (OoO) pipelines that rely on dynamic scheduling, register renaming, operand forwarding, and speculative execution. These aggressive microarchitectural behaviors, while improving performance, also create intricate dataflow interactions that can inadvertently violate the independence of masked shares. As a result, secret shares that are theoretically isolated may recombine transiently through register reuse, speculative reordering, or transition-based leakage in the physical register file. This raises a critical question: to what extent do masking countermeasures remain secure when executed on modern OoO architectures?
To explore this question, we develop a systematic methodology for architectural leakage analysis based on fine-grained execution traces from an OoO RISC-V processor. The analysis correlates microarchitectural events with instruction-level behavior, reconstructs the evolution of physical register assignments, and identifies instances where fundamental masking principles such as non-completeness or uniformity are violated. Through this structured approach, we establish a taxonomy of microarchitectural leakage classes and formalize two dominant categories: (1) Rename-Induced Transition Leakage (RIL), arising from register reuse and transitional dependencies between masked shares; and (2) IEW-Induced Dispatch Leakage, originating from speculative instruction issue and execution reordering.
The methodology is demonstrated across three representative studies: (i) a masked Toffoli gate, where violations of non-completeness are revealed; (ii) a masked PRESENT cipher, where 571 leakage events are identified due to register-induced transitions; and (iii) a Probe Isolating Non-Interference (PINI) experiment, which exposes how speculative and dynamic scheduling can compromise isolation guarantees, thereby weakening composability of masked implementations that are provably secure under idealized models.
These results underline a fundamental insight: the security of masking countermeasures cannot be evaluated in abstraction from the hardware on which they execute. Ensuring true resistance to side-channel attacks demands a holistic view, one that jointly considers both the algorithmic soundness of masking constructions and the microarchitectural realities of modern processors.
## 2025/1945
* Title: So about that Quantum Lattice Thing: Rebuttal to "Exact Coset Sampling for Quantum Lattice Algorithms"
* Authors: Daniel Apon
* [Permalink](
https://eprint.iacr.org/2025/1945)
* [Download](
https://eprint.iacr.org/2025/1945.pdf)
### Abstract
In this note we identify major issues with "Exact Coset Sampling for Quantum Lattice Algorithms" by Zhang. The issues include:
- in the first version, the proposed algorithm requires foreknowledge of the answer to compute the answer;
- in the latest version, the proposed algorithm displays basic misunderstandings of quantum mechanics.
We believe the existence of these issues refutes the author's claims.
## 2025/1946
* Title: Robust and Scalable Lattice-Based Distributed Key Generation for Asynchronous Networks
* Authors: Linghe Yang, Jian Liu, Jingyi Cui, Guangquan Xu, Mingzi Zuo, Lei Zhang, Zhongshan Li
* [Permalink](
https://eprint.iacr.org/2025/1946)
* [Download](
https://eprint.iacr.org/2025/1946.pdf)
### Abstract
Distributed Key Generation (DKG) is essential for secure, decentralized cryptographic systems, enabling collaborative key pair generation without a trusted authority. This capability underpins critical applications such as threshold signatures and blockchain-based protocols. To achieve post-quantum security, existing robust lattice-based DKG protocols, tailored for synchronous networks, rely on complaint-based Verifiable Secret Sharing (VSS). However, these protocols lack public verifiability and compatibility with asynchronous environments, constraining their use in Byzantine fault-tolerant settings.
This paper presents LADKG, a Lattice-Based Asynchronous Distributed Key Generation framework designed for post-quantum secure and scalable distributed systems. LADKG integrates Asynchronous Verifiable Short Secret Sharing (AV3S) with an Approximate Asynchronous Common Subset (AACS) protocol to achieve efficient key generation. By deferring verification and leveraging deterministic approximate agreement, LADKG reduces computational and communication overhead while maintaining security and robustness. Evaluations on geo-distributed AWS EC2 clusters demonstrate that LADKG is comparable to or better than classical Asynchronous Distributed Key Generation (ADKG) schemes in scalability and efficiency. Under optimistic conditions with $n=121$ nodes, completion is achieved in 45 seconds, ensuring robust key generation for post-quantum secure applications.
## 2025/1947
* Title: Minicrypt PRFs Do Not Admit Black-Box Oblivious Evaluations
* Authors: Cruz Barnum, Mohammad Hajiabadi, David Heath, Jake Januzelli, Namar Kumar, Mike Rosulek
* [Permalink](
https://eprint.iacr.org/2025/1947)
* [Download](
https://eprint.iacr.org/2025/1947.pdf)
### Abstract
Oblivious Pseudorandom Function (OPRF) protocols can be categorized into two variants: chosen-key OPRFs -- where the server provides the PRF key $k$ as an input -- and ephemeral-key OPRFs -- where the functionality randomly samples $k$ on behalf of the server. Ephemeral-key OPRFs can be constructed from simple cryptography, such as black-box OT and a random oracle. Chosen-key OPRFs, on the other hand, are only known by employing one of the following (more expensive) approaches:
- Express the underlying PRF as a circuit, then use generic MPC techniques to evaluate it in a non-black-box manner.
- Base the underlying PRF on some specific public-key assumption, such as RSA or DDH, then build a custom protocol that achieves obliviousness by leveraging the PRF's algebraic structure.
Thus, there exists a qualitative gap between known instantiations of these two OPRF variants, in terms of both assumptions and efficiency.
We show that this gap is inherent.
In particular, one might hope for an OPRF with all of the following characteristics:
the protocol
(1) is chosen-key,
(2) supports domains of super-polynomial size, and
(3) is constructed from "simple" cryptography, such as the minimal assumption of OT, plus a random oracle.
We show that no such protocol can exist.
Let $\Pi$ be any chosen-key OPRF protocol defined in terms of a PRF $F$, where $F$ is defined with respect to (only) a random oracle.
This restriction on $F$ rules out the above approaches of either using a PRF in a non-black-box way or basing the underlying PRF itself on public-key cryptography.
While $F$ is restricted in its use of cryptography, the protocol $\Pi$ is not: $\Pi$ may use arbitrary cryptography, e.g., OT, FHE, iO, etc.
We show that each invocation of any such $\Pi$ necessarily leaks information about the server's key $k$.
After a bounded number of queries, an adversarial client can effectively recover $k$, breaking server privacy.
To complement our negative result, we provide a matching positive result: we construct a chosen-key OPRF from black-box OT and RO, where server privacy holds for some bounded number of queries $n$.
This protocol's underlying PRF is constructed from a $(n+1)$-wise independent hash function and RO; the server's key $k$ has length scaling linearly in $n$ which, by our lower bound, is optimal.
Thus, our two results tightly (i.e., up to $\mathrm{poly}(\lambda)$ factors) characterize (im)possibility for chosen-key OPRFs, unless one uses non-black-box cryptography or a public-key-style PRF.
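The positive result's underlying ingredient can be illustrated with the textbook $(n+1)$-wise independent hash: a uniformly random degree-$n$ polynomial over a prime field, whose key of $n+1$ coefficients scales linearly in $n$, in line with the bound above. The Python sketch below shows only this hash; the random-oracle composition and the OPRF protocol around it are omitted, and the field choice is ours.

```python
# (n+1)-wise independent hash as a random degree-n polynomial over F_P.
import secrets

P = 2**127 - 1                                     # illustrative prime field

def keygen(n):
    return [secrets.randbelow(P) for _ in range(n + 1)]   # n+1 coefficients: key length ~ n

def eval_hash(key, x):
    # Horner evaluation of key[0] + key[1]*x + ... + key[n]*x^n (mod P).
    acc = 0
    for c in reversed(key):
        acc = (acc * x + c) % P
    return acc

key = keygen(n=8)                                  # toy bound of ~8 client queries
print(eval_hash(key, 42))
```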
## 2025/1948
* Title: Feedback Lunch: Deep Feedback Codes for Wiretap Channels
* Authors: Yingyao Zhou, Natasha Devroye, Onur Günlü
* [Permalink](
https://eprint.iacr.org/2025/1948)
* [Download](
https://eprint.iacr.org/2025/1948.pdf)
### Abstract
We consider reversely-degraded wiretap channels, for which the secrecy capacity is zero if there is no channel feedback. This work focuses on a seeded modular code design for the Gaussian wiretap channel with channel output feedback, combining universal hash functions for security and learned feedback-based codes for reliability to achieve positive secrecy rates. We study the trade-off between communication reliability and information leakage, illustrating that feedback enables agreeing on a secret key shared between legitimate parties, overcoming the security advantage of the wiretapper. Our findings also motivate code designs for sensing-assisted secure communication, to be used in next-generation integrated sensing and communication methods.
## 2025/1949
* Title: On the Credibility of Deniable Communication in Court
* Authors: Jacob Leiken, Sunoo Park
* [Permalink](
https://eprint.iacr.org/2025/1949)
* [Download](
https://eprint.iacr.org/2025/1949.pdf)
### Abstract
Over time, cryptographically deniable systems have come to be associated in computer-science literature with the idea of "denying" evidence in court: specifically, with the ability to convincingly forge evidence in courtroom scenarios and, relatedly, an inability to authenticate evidence in such contexts. Indeed, in some cryptographic models, the ability to falsify mathematically implies the inability to authenticate. Evidentiary processes in courts, however, have been developed over centuries to account for the reality that evidence has always been forgeable, and they rely on factors outside of cryptographic models to seek the truth "as well as possible" while acknowledging that all evidence is imperfect. We argue that deniability does not and need not change this paradigm.
Our analysis highlights a gap between technical deniability notions and their application to the real world. There will essentially always be factors outside a cryptographic model that influence perceptions of a message's authenticity, in realistic situations. We propose the broader concept of credibility to capture these factors. The credibility of a system is determined by (1) a threshold of quality that a forgery must pass to be "believable" as an original communication, which varies based on sociotechnical context and threat model, (2) the ease of creating a forgery that passes this threshold, which is also context- and threat-model-dependent, and (3) default system retention policy and retention settings. All three aspects are important for designing secure communication systems for real-world threat models, and some aspects of (2) and (3) may be incorporated directly into technical system design. We hope that our model of credibility will facilitate system design and deployment that addresses threats that are not and cannot be captured by purely technical definitions and existing cryptographic models, and support more nuanced discourse on the strengths and limitations of cryptographic guarantees within specific legal and sociotechnical contexts.
## 2025/1950
* Title: Fully Adaptive FROST in the Algebraic Group Model From Falsifiable Assumptions
* Authors: Ruben Baecker, Paul Gerhart, Davide Li Calsi, Luigi Russo, Dominique Schröder, Arkady Yerukhimovich
* [Permalink](
https://eprint.iacr.org/2025/1950)
* [Download](
https://eprint.iacr.org/2025/1950.pdf)
### Abstract
We present the first round-optimal Schnorr threshold signature scheme that achieves full adaptive security against algebraic adversaries, relying solely on the Algebraic One-More Discrete Log (AOMDL) assumption.
Our scheme, FaFROST, builds on the FROST framework, preserving its two-round signing structure and communication efficiency.
By avoiding binding commitments to partial public keys, FaFROST circumvents the recent impossibility results from CRYPTO'25 and requires no reliance on the newly introduced, tailor-made LDVR assumption.
This establishes that round-optimal, adaptively secure Schnorr threshold signatures are achievable under well-established algebraic assumptions.
## 2025/1951
* Title: What is Cryptography Hiding from Itself?
* Authors: Diego F. Aranha, Nikolas Melissaris
* [Permalink](
https://eprint.iacr.org/2025/1951)
* [Download](
https://eprint.iacr.org/2025/1951.pdf)
### Abstract
The European Commission's 2022 proposal for a regulation on child sexual abuse material, popularly labelled ChatControl, obliges online services to detect, report, and remove prohibited content, through client-side scanning.
This paper examines the proposal as a case of undone science in computer security ethics: a domain where technical feasibility and rights-compatibility questions remain systematically underexplored. Combining legal analysis with philosophy of technology, the paper argues that client-side scanning transforms end-to-end encryption from a right to secrecy into a conditional privilege of use. By integrating Isaiah Berlin's concept of negative liberty, Langdon Winner's account of the politics of artifacts, and David Hess's notion of undone science, the analysis traces how design choices become moral constraints.
The discussion situates the European debate within broader concerns about proportionality, epistemic selectivity, and the governance of digital infrastructures. Ultimately, the study shows that the controversy over ChatControl is not only about privacy or child protection but about the epistemic norms that define what counts as legitimate technological knowledge.
## 2025/1952
* Title: KPIR-C: Keyword PIR with Arbitrary Server-side Computation
* Authors: Ali Arastehfard, Weiran Liu, Qixian Zhou, Zinan Shen, Liqiang Peng, Lin Qu, Shuya Feng, Yuan Hong
* [Permalink](
https://eprint.iacr.org/2025/1952)
* [Download](
https://eprint.iacr.org/2025/1952.pdf)
### Abstract
Private Information Retrieval (PIR) enables clients to retrieve data from a server without revealing their query. Keyword PIR (KPIR), an extension that enables PIR with keyword-based queries, is crucial for privacy-preserving two-party analytics in unbalanced settings. However, existing KPIR solutions face two challenges: efficiently supporting arbitrary server-side computations and handling mismatched queries non-interactively.
To the best of our knowledge, we take the first step to introduce Keyword PIR with Computation (``KPIR-C''), a novel PIR primitive that enables arbitrary non-interactive computation on responses while preserving query privacy. We overcome the arbitrary-computation challenge by introducing TFHE into KPIR, which ensures efficient bootstrapping and allows arbitrary server-side computations. We address the mismatch challenge by identifying an important KPIR-C subroutine, referred to as KPIR with Default (``KPIR-D''), which removes the disturbance to the computation caused by mismatched responses. We instantiate KPIR-C with two constructions, one based on constant-weight codes and the other on recent LWE-based KPIR approaches. Both constructions enable efficient post-computation and offer trade-offs between communication overhead and runtime. Experiments show that our implemented constructions achieve competitive performance, and in some cases even outperform state-of-the-art KPIR solutions that do not support arbitrary computation.
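The functionality that KPIR-C and its KPIR-D subroutine target can be written down in a few plaintext lines: the response is the computed-on value for a matching keyword and a default value when the keyword is absent, all in one shot. The Python sketch below describes only this plaintext functionality (in the actual constructions the selectors and the computation are evaluated under TFHE, so the server learns nothing about the query); the function names, example computation, and default value are ours.

```python
# Plaintext description of the KPIR-C / KPIR-D functionality (no privacy here;
# the real protocols evaluate the same logic homomorphically under TFHE).
def kpir_c(database, query_keyword, compute, default):
    selectors = [int(query_keyword == k) for k, _ in database]   # one-hot or all-zero
    matched = sum(s * compute(v) for s, (_, v) in zip(selectors, database))
    return matched + (1 - sum(selectors)) * default              # KPIR-D: default on mismatch

db = [("alice", 17), ("bob", 4), ("carol", 29)]
square = lambda v: v * v                                         # server-side computation
print(kpir_c(db, "bob", square, default=-1))                     # -> 16
print(kpir_c(db, "dave", square, default=-1))                    # -> -1
```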