The work of Asharov et al. (WWW'17) addresses this problem by designing secure solutions for centrality measures that involve computing the truncated Katz score and reach score on multilayer graphs. However, we identify several limitations in that work
which render the solution inefficient or even infeasible for realistic networks with significantly more than 10k nodes. We address these limitations by designing secure solutions that are significantly more efficient and scalable. In more detail, given
that real-world graphs are known to be sparse, our solutions move away from an expensive matrix-based representation to a more efficient list-based representation. We design novel, secure, and efficient solutions for computing centrality measures and
prove their correctness. Our solutions drastically reduce the asymptotic complexity from the prohibitive $\mathcal{O}(|\mathsf{V}|^2)$ even for the fastest solution by Asharov et al. down to $\mathcal{O}(|\mathsf{V}|\log |\mathsf{V}|)$, for $|\mathsf{V}|$
nodes. To design our solutions, we extend upon the secure graph computation framework of Koti et al. (CCS'24), providing a novel framework with improved capabilities in multiple directions. Finally, we provide an end-to-end implementation of our secure
graph analysis framework and establish concrete efficiency improvements over prior work, observing several orders of magnitude improvement.
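For readers unfamiliar with the measure, the sketch below computes the truncated Katz score in the clear on a sparse, adjacency-list representation of a single-layer graph. It is only a plaintext reference for the quantity the secure protocols evaluate; the damping factor and truncation depth are illustrative choices, not parameters from the paper.

```python
# A minimal plaintext reference for the truncated Katz score on a sparse,
# list-based graph. This is NOT the secure multiparty protocol from the paper;
# it only illustrates the measure being computed. beta and depth are
# illustrative choices.

def truncated_katz(adj, beta=0.1, depth=3):
    """adj: dict mapping node -> list of successor nodes (adjacency list).
    Returns node -> sum_{l=1..depth} beta^l * (#walks of length l ending there)."""
    nodes = list(adj)
    walks = {v: 1.0 for v in nodes}        # walks of length 0 ending anywhere
    katz = {v: 0.0 for v in nodes}
    factor = 1.0
    for _ in range(depth):
        factor *= beta
        nxt = {v: 0.0 for v in nodes}
        for u in nodes:
            for v in adj[u]:               # extend every walk ending at u by edge (u, v)
                nxt[v] += walks[u]
        walks = nxt
        for v in nodes:
            katz[v] += factor * walks[v]
    return katz

# Example: a small directed graph given as adjacency lists.
graph = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
print(truncated_katz(graph))
```

Each iteration touches every edge once, so the plaintext cost is proportional to the number of edges rather than $|\mathsf{V}|^2$, which is the intuition behind moving to a list-based representation.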
## 2025/653
* Title: Fission: Distributed Privacy-Preserving Large Language Model Inference
* Authors: Mehmet Ugurbil, Dimitris Mouris, Manuel B. Santos, José Cabrero-Holgueras, Miguel de Vega, Shubho Sengupta
* [Permalink](https://eprint.iacr.org/2025/653)
* [Download](https://eprint.iacr.org/2025/653.pdf)
### Abstract
The increased popularity of large language models (LLMs) raises serious privacy concerns, where users' private queries are sent to untrusted servers. Many cryptographic techniques have been proposed to provide privacy, such as secure multiparty
computation (MPC), which enables the evaluation of LLMs directly on private data. However, cryptographic techniques have been deemed impractical as they introduce large communication and computation overheads. On the other hand, many obfuscation techniques have
been proposed, such as split inference, where part of the model is evaluated on edge devices to hide the input data from untrusted servers, but these methods provide limited privacy guarantees.
We propose Fission, a privacy-preserving framework that improves latency while providing strong privacy guarantees. Fission utilizes an MPC network for linear computations, while nonlinearities are computed on a separate evaluator network that receives
shuffled values in the clear and returns nonlinear functions evaluated at these values back to the MPC network. As a result, each evaluator only gets access to parts of the shuffled data, while the model weights remain private. We evaluate Fission on a
wide set of LLMs and compare it against prior works. Fission results in up to eight times faster inference and eight times reduced bandwidth compared to prior works while retaining high accuracy. Finally, we construct an attack on obfuscation techniques
from related works, which shows significant information leakage, and we demonstrate how Fission enhances privacy.
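To make the architecture concrete, here is a toy, single-process simulation of the shuffle-then-reveal idea described above: parties holding additive shares permute the activations, reveal only the shuffled values to an evaluator that computes the nonlinearity in the clear, and then undo the permutation. It ignores fixed-point encoding, the actual secure shuffling protocol, and the split across multiple evaluators, so it is a sketch of the idea rather than Fission itself.

```python
# Toy, insecure simulation of shuffle-then-reveal for nonlinearities.
# Additive sharing over floats is a stand-in for sharing over a ring.
import numpy as np

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
x = rng.normal(size=8)                 # secret activations

# Additive sharing between two MPC parties (toy version).
share0 = rng.normal(size=x.shape)
share1 = x - share0

# The MPC parties agree on a secret permutation; the evaluator never learns it.
perm = rng.permutation(len(x))
revealed = (share0 + share1)[perm]     # evaluator sees values in shuffled order only

# Evaluator computes the nonlinearity in the clear and returns the results.
evaluated = gelu(revealed)

# Parties undo the permutation (and would re-share the result).
result = np.empty_like(evaluated)
result[perm] = evaluated
assert np.allclose(result, gelu(x))
```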
## 2025/654
* Title: ECDSA Cracking Methods
* Authors: William J Buchanan, Jamie Gilchrist, Keir Finlow-Bates
* [Permalink](https://eprint.iacr.org/2025/654)
* [Download](https://eprint.iacr.org/2025/654.pdf)
### Abstract
The ECDSA (Elliptic Curve Digital Signature Algorithm) is used in many blockchain networks for digital signatures, including the Bitcoin and Ethereum blockchains. While it has good performance levels and strong current security, it should be handled with care. This care typically relates to the usage of the nonce value used to create the signature. This paper outlines the methods that can be used to break ECDSA signatures, including revealed nonces, weak nonce choice, nonce reuse, two keys and shared nonces, and fault attacks.
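As a worked example of the nonce-reuse case listed above, the snippet below recovers the private key from two signatures that share a nonce. Only the modular algebra is shown: the group order is that of secp256k1, but the key, nonce, hashes, and the value r (which in a real signature is the x-coordinate of kG) are placeholder demo values, not real signature data.

```python
# Classic ECDSA nonce-reuse key recovery (algebra only; r is a placeholder
# for the x-coordinate of k*G, and all other values are demo constants).

# Group order of secp256k1 (as used by Bitcoin and Ethereum).
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = 0xC0FFEE        # demo private key
k = 0x1234567       # nonce reused for both signatures (the mistake)
r = 0xABCDEF        # placeholder for the x-coordinate of k*G mod n
h1, h2 = 0x1111, 0x2222   # demo message hashes

# s = k^{-1} * (h + r*d) mod n
s1 = pow(k, -1, n) * (h1 + r * d) % n
s2 = pow(k, -1, n) * (h2 + r * d) % n

# Recovery: identical r betrays the reused nonce, so
#   k = (h1 - h2) / (s1 - s2)   and   d = (s1*k - h1) / r     (all mod n)
k_rec = (h1 - h2) * pow((s1 - s2) % n, -1, n) % n
d_rec = (s1 * k_rec - h1) * pow(r, -1, n) % n
assert (k_rec, d_rec) == (k, d)
```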
## 2025/655
* Title: Taking AI-Based Side-Channel Attacks to a New Dimension
* Authors: Lucas David Meier, Felipe Valencia, Cristian-Alexandru Botocan, Damian Vizár
* [Permalink](https://eprint.iacr.org/2025/655)
* [Download](https://eprint.iacr.org/2025/655.pdf)
### Abstract
This paper revisits the Hamming Weight (HW) labelling function for machine-learning-assisted side-channel attacks. Contrary to what has been suggested by previous works, our investigation shows that, when paired with modern deep learning architectures and appropriate pre-processing and normalization techniques, it can perform as well as the popular identity labelling function and sometimes even beat it. In fact, we hereby introduce a new machine learning method, dubbed, that helps solve the class imbalance problem associated with HW, while significantly improving the performance of unprofiled attacks. We additionally release our new, easy-to-use Python package that we used in our experiments, implementing a broad variety of machine-learning-driven side-channel attacks as open source, along with a new dataset, AES_nRF, acquired on the nRF52840 SoC.
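The class imbalance mentioned above is easy to see numerically: over uniform byte values the nine HW classes follow a binomial distribution, so the extreme classes (HW 0 and 8) are 70 times rarer than the middle class (HW 4). The snippet below prints the class counts and one standard mitigation, inverse-frequency class weights; it does not reproduce the paper's method.

```python
# Hamming-weight labels over all byte values, and the resulting class imbalance.
from collections import Counter
from math import comb

hw = [bin(b).count("1") for b in range(256)]    # HW label for every byte value
counts = Counter(hw)
print({w: counts[w] for w in range(9)})          # {0: 1, 1: 8, ..., 4: 70, ..., 8: 1}
assert all(counts[w] == comb(8, w) for w in range(9))

# Inverse-frequency class weights, e.g. for a weighted cross-entropy loss.
total = sum(counts.values())
class_weights = {w: total / (9 * counts[w]) for w in range(9)}
print(class_weights)
```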
## 2025/656
* Title: Unbounded Multi-Hop Proxy Re-Encryption with HRA Security: An LWE-Based Optimization
* Authors: Xiaohan Wan, Yang Wang, Haiyang Xue, Mingqiang Wang
* [Permalink](https://eprint.iacr.org/2025/656)
* [Download](https://eprint.iacr.org/2025/656.pdf)
### Abstract
Proxy re-encryption (PRE) schemes enable a semi-honest proxy to transform a ciphertext of one user $i$ to another user $j$ while preserving the privacy of the underlying message. Multi-hop PRE schemes allow a legal ciphertext to undergo multiple
transformations, but for lattice-based multi-hop PREs, the number of transformations is typically bounded due to the increase of error terms. Recently, Zhao et al. (Esorics 2024) introduced a lattice-based unbounded multi-hop (homomorphic) PRE scheme
that supports an unbounded number of hops. Nevertheless, their scheme only achieves selective CPA security. In contrast, Fuchsbauer et al. (PKC 2019) proposed a generic framework for constructing HRA-secure unbounded multi-hop PRE schemes from FHE.
Despite this, when instantiated with state-of-the-art FHEW-like schemes, the overall key size and efficiency remain unsatisfactory.
In this paper, we present a lattice-based unbounded multi-hop PRE scheme with the stronger adaptive HRA security (i.e., security against honest re-encryption attacks), which is more suitable for practical applications. Our scheme features an optimized re-encryption process based on FHEW-like blind rotation, which resolves the incompatibility between the noise flooding technique and Fuchsbauer et al.'s framework when instantiated with FHEW-like schemes. This results in reduced storage requirements
for public keys and offers higher efficiency. Moreover, our optimized unbounded multi-hop PRE scheme can be modified to an unbounded homomorphic PRE, a scheme allowing for arbitrary homomorphic computations over fresh, re-encrypted, and evaluated
ciphertexts.
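For orientation, the following interface sketch fixes the syntax of a multi-hop proxy re-encryption scheme as used above. The method names are illustrative and nothing here reflects the lattice-based construction, its noise management, or the HRA security argument.

```python
# A generic multi-hop PRE interface, given only to fix terminology; the names
# are illustrative assumptions, not the paper's algorithms.
from typing import Any, Protocol, Tuple

class MultiHopPRE(Protocol):
    def keygen(self) -> Tuple[Any, Any]:
        """Return (public key, secret key) for one user."""
    def encrypt(self, pk: Any, message: Any) -> Any:
        """Produce a fresh ciphertext under a user's public key."""
    def rekeygen(self, sk_i: Any, pk_j: Any) -> Any:
        """Re-encryption key rk_{i->j} from user i's secret key and user j's public key."""
    def reencrypt(self, rk_ij: Any, ct_i: Any) -> Any:
        """Proxy step: transform a ciphertext for user i into one for user j without
        learning the message; 'unbounded multi-hop' means this can be repeated
        arbitrarily often without decryption failing."""
    def decrypt(self, sk: Any, ct: Any) -> Any:
        """Decrypt a fresh, re-encrypted, or (for homomorphic PRE) evaluated ciphertext."""
```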
## 2025/657
* Title: Key Derivation Functions Without a Grain of Salt
* Authors: Matilda Backendal, Sebastian Clermont, Marc Fischlin, Felix Günther
* [Permalink](https://eprint.iacr.org/2025/657)
* [Download](https://eprint.iacr.org/2025/657.pdf)
### Abstract
Key derivation functions (KDFs) are integral to many cryptographic protocols. Their functionality is to turn raw key material, such as a Diffie-Hellman secret, into a strong cryptographic key that is indistinguishable from random. This guarantee was
formalized by Krawczyk together with the seminal introduction of HKDF (CRYPTO 2010), in a model where the KDF only takes a single key material input. Modern protocol designs, however, regularly need to combine multiple secrets, possibly even from
different sources, with the guarantee that the derived key is secure as long as at least one of the inputs is good. This is particularly relevant in settings like hybrid key exchange for quantum-safe migration. Krawczyk's KDF formalism does not capture
this goal, and there has been surprisingly little work on the security considerations for KDFs since then.
In this work, we thus revisit the syntax and security model for KDFs to treat multiple, possibly correlated inputs. Our syntax is assertive: We do away with salts, which are needed in theory to extract from arbitrary sources in the standard model, but in
practice, they are almost never used (or even available) and sometimes even misused, as we argue. We use our new model to analyze real-world multi-input KDFs—in Signal's X3DH protocol, ETSI's TS 103 744 standard, and MLS' combiner for pre-shared keys—
as well as new constructions we introduce for specialized settings—e.g., a purely blockcipher-based one. We further discuss the importance of collision resistance for KDFs and finally apply our multi-input KDF model to show how hybrid KEM key exchange
can be analyzed from a KDF perspective.
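As a concrete (if folklore) example of a multi-input KDF of the kind discussed above, the sketch below concatenates length-prefixed secrets and feeds them through HKDF-SHA256 with an empty salt. It is not one of the paper's constructions and is shown only to make the multi-input, hybrid-key-exchange setting tangible.

```python
# Folklore "concatenate then extract" combiner built from HKDF-SHA256.
# Not a construction from the paper; the empty salt mirrors common practice.
import hmac, hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def combine(inputs: list, context: bytes) -> bytes:
    # Length-prefix each input so distinct input vectors cannot collide
    # after concatenation.
    ikm = b"".join(len(s).to_bytes(4, "big") + s for s in inputs)
    return hkdf_expand(hkdf_extract(b"", ikm), context)

# E.g. hybrid key exchange: combine an ECDH secret with a post-quantum KEM secret.
key = combine([b"ecdh-shared-secret", b"kem-shared-secret"], b"example protocol v1")
print(key.hex())
```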
## 2025/658
* Title: Efficient Verifiable Mixnets from Lattices, Revisited
* Authors: Jonathan Bootle, Vadim Lyubashevsky, Antonio Merino-Gallardo
* [Permalink](https://eprint.iacr.org/2025/658)
* [Download](https://eprint.iacr.org/2025/658.pdf)
### Abstract
Mixnets are powerful building blocks for providing anonymity in applications like electronic voting and anonymous messaging. The encryption schemes upon which traditional mixnets are built, as well as the zero-knowledge proofs used to provide verifiability, will, however, soon become insecure once a cryptographically-relevant quantum computer is built. In this work, we construct the most compact verifiable mixnet that achieves privacy and verifiability through encryption and zero-knowledge proofs based on the hardness of lattice problems, which are believed to be quantum-safe.
A core component of verifiable mixnets is a proof of shuffle. The starting point for our construction is the proof of shuffle of Aranha et al. (CT-RSA 2021). We first identify an issue with the soundness proof in that work, which is also present in the adaptation of this proof in the mixnets of Aranha et al. (ACM CCS 2023) and Hough et al. (IACR CiC 2025). The issue is that one cannot directly adapt classical proofs of shuffle to the lattice setting due to the splitting structure of the rings used in lattice-based cryptography. This is not just an artifact of the proof, but a problem that manifests itself in practice, and we successfully mount an attack against the implementation of the first of the mixnets. We fix the problem and introduce a general approach for proving shuffles in splitting rings that can be of independent interest.
The efficiency improvement of our mixnet over prior work is achieved by switching from re-encryption mixnets (as in the works of Aranha et al. and Hough et al.) to decryption mixnets with very efficient layering based on the hardness of the LWE and LWR problems over polynomial rings. The ciphertexts in our scheme are smaller by approximately a factor of 10X and 2X over the aforementioned instantiations, while the linear-size zero-knowledge proofs are smaller by a factor of 4X and 2X.
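The decryption-mixnet layering mentioned in the last paragraph can be illustrated with a toy example: senders wrap each message in one encryption layer per mix, and each mix strips its layer and shuffles the batch. The throwaway XOR "cipher" below stands in for the paper's LWE/LWR-based layering, and none of the verifiability (the proofs of shuffle) is modelled.

```python
# Toy decryption mixnet: layered encryption, each mix strips one layer and
# shuffles. The XOR stream "cipher" is a placeholder, not a real scheme.
import hashlib, random

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    pad = hashlib.shake_256(key).digest(len(msg))
    return bytes(a ^ b for a, b in zip(msg, pad))

toy_decrypt = toy_encrypt  # XOR stream: encryption and decryption coincide

mix_keys = [b"mix-key-1", b"mix-key-2", b"mix-key-3"]
messages = [b"ballot-A", b"ballot-B", b"ballot-C"]

def wrap(msg):
    # Innermost layer belongs to the last mix, outermost to the first.
    ct = msg
    for key in reversed(mix_keys):
        ct = toy_encrypt(key, ct)
    return ct

batch = [wrap(m) for m in messages]

# Each mix removes its layer and shuffles the batch before forwarding it.
random.seed(42)
for key in mix_keys:
    batch = [toy_decrypt(key, ct) for ct in batch]
    random.shuffle(batch)

print(sorted(batch) == sorted(messages))  # True: plaintexts out, order hidden
```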
## 2025/659
* Title: Scalable and Fine-Tuned Privacy Pass from Group Verifiable Random Functions
* Authors: Dennis Faut, Julia Hesse, Lisa Kohl, Andy Rupp
* [Permalink](https://eprint.iacr.org/2025/659)
* [Download](https://eprint.iacr.org/2025/659.pdf)
### Abstract
Anonymous token schemes are cryptographic protocols for limiting the access to online resources to credible users. The resource provider issues a set of access tokens to the credible user that they can later redeem anonymously, i.e., without the provider being able to link their redemptions. When combined with credibility tests such as CAPTCHAs, anonymous token schemes can significantly increase user experience and provider security, without exposing user access patterns to providers.
Current anonymous token schemes such as the Privacy Pass protocol by Davidson et al. rely on oblivious pseudorandom functions (OPRFs), which let server and user jointly compute random-looking access tokens. For those protocols, token issuing costs are linear in the number of requested tokens.
In this work, we propose a new approach for building anonymous token schemes. Instead of relying on two-party computation to realize a privacy-preserving pseudorandom function evaluation, we propose to offload token generation to the user by using group verifiable random functions (GVRFs). GVRFs are a new cryptographic primitive that allows users to produce verifiable pseudorandomness. As opposed to standard VRFs, verification is anonymous within the group of credible users. We give a construction of group VRFs from the Dodis-Yampolskiy VRF and Equivalence-Class Signatures, based on pairings and a new Diffie-Hellman inversion assumption that we analyze in the Generic Group Model. Our construction enjoys compact public keys and proofs, while evaluation and verification costs are only slightly increased compared to the Dodis-Yampolskiy VRF.
By deploying a group VRF instead of an OPRF, we obtain an anonymous token scheme where communication as well as server-side computation during the issuing phase is constant and independent of the number of tokens a user requests. Moreover, by means of our new concept of updatable token policies, the number of unspent tokens in circulation can retrospectively (i.e., even after the credibility check) be decreased or increased in order to react to the current or expected network situation. Our tokens are further countable and publicly verifiable. This comes at the cost of higher computational efforts for token redemption and verification as well as somewhat weaker unlinkability guarantees compared to Privacy Pass.
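For reference, the underlying Dodis-Yampolskiy VRF over a bilinear group $(\mathbb{G}, \mathbb{G}_T, e, g)$ works as summarized below; the group-VRF construction of the paper wraps this with equivalence-class signatures and is not reproduced here.

```latex
% Dodis-Yampolskiy VRF (textbook background only; not the paper's group VRF).
\begin{aligned}
  \mathsf{sk} &\gets \mathbb{Z}_p, \qquad \mathsf{pk} = g^{\mathsf{sk}},\\
  \pi &= g^{1/(\mathsf{sk}+x)}, \qquad y = e(g,g)^{1/(\mathsf{sk}+x)} = e(g,\pi),\\
  \text{Verify:}\quad & e\!\left(\mathsf{pk}\cdot g^{x},\, \pi\right) = e(g,g)
  \quad\text{and}\quad y = e(g,\pi).
\end{aligned}
```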
## 2025/660
* Title: Eccfrog512ck2: An Enhanced 512-bit Weierstrass Elliptic Curve
* Authors: Víctor Duarte Melo, William J Buchanan
* [Permalink](https://eprint.iacr.org/2025/660)
* [Download](https://eprint.iacr.org/2025/660.pdf)
### Abstract
Whilst many key exchange and digital signature methods use the NIST P-256 (secp256r1) and secp256k1 curves, there is often a demand for increased security. With these curves, we have 128-bit security. This security level can be increased to 256-bit security with NIST P-521, Curve448, and Brainpool-P512. This paper outlines a new curve, Eccfrog512ck2, which provides 256-bit security and enhanced performance over NIST P-521. Along with this, it has side-channel resistance and is designed to avoid weaknesses such as those related to the MOV attack. We show that Eccfrog512ck2 can have a 61.5% speed-up on scalar multiplication and a 33.3% speed-up on point generation over the NIST P-521 curve.
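Since the abstract does not list the curve constants, the snippet below only shows the shape of the basic sanity checks one would run on any short Weierstrass curve $y^2 = x^3 + ax + b$ over $\mathbb{F}_p$; the parameters used are toy placeholders, not the Eccfrog512ck2 values.

```python
# Generic short Weierstrass sanity checks. The parameters below are toy
# placeholders, NOT the Eccfrog512ck2 constants (which the abstract omits).
def on_curve(p: int, a: int, b: int, x: int, y: int) -> bool:
    return (y * y - (x * x * x + a * x + b)) % p == 0

def nonsingular(p: int, a: int, b: int) -> bool:
    # A Weierstrass curve is nonsingular iff 4a^3 + 27b^2 != 0 (mod p).
    return (4 * a**3 + 27 * b**2) % p != 0

p, a, b = 97, 2, 3           # hypothetical toy parameters
print(nonsingular(p, a, b), on_curve(p, a, b, 3, 6))
```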
## 2025/661
* Title: An LLM Framework For Cryptography Over Chat Channels
* Authors: Danilo Gligoroski, Mayank Raikwar, Sonu Kumar Jha
* [Permalink](https://eprint.iacr.org/2025/661)
* [Download](https://eprint.iacr.org/2025/661.pdf)
### Abstract
Recent advancements in Large Language Models (LLMs) have transformed communication, yet their role in secure messaging remains underexplored, especially in surveillance-heavy environments. At the same time, many governments all over the world are
proposing legislation to detect, backdoor, or even ban encrypted communication. This emphasizes the need for alternative ways to communicate securely and covertly over open channels. We propose a novel cryptographic embedding framework that enables covert public-key or symmetric-key encrypted communication over public chat channels with human-like produced texts. Some unique properties of our framework are: 1. It is LLM-agnostic, i.e., it allows participants to use different local LLM models independently; 2. It is pre- or post-quantum agnostic; 3. It ensures indistinguishability from human-like chat-produced texts. Thus, it offers a viable alternative where traditional encryption is detectable and restricted.
## 2025/662
* Title: Attribute-Based Publicly Verifiable Secret Sharing
* Authors: Liang Zhang, Xingyu Wu, Qiuling Yue, Haibin Kan, Jiheng Zhang
* [Permalink](https://eprint.iacr.org/2025/662)
* [Download](https://eprint.iacr.org/2025/662.pdf)
### Abstract
Can a dealer share a secret without knowing the shareholders? We provide a positive answer to this question by introducing the concept of an attribute-based secret sharing (AB-SS) scheme. With AB-SS, a dealer can distribute a secret based on attributes
rather than specific individuals or shareholders. Only authorized users whose attributes satisfy a given access structure can recover the secret. Furthermore, we introduce the concept of attribute-based publicly verifiable secret sharing (AB-PVSS). An AB-PVSS scheme allows external users to verify the correctness of all broadcast messages from the dealer and shareholders, similar to a traditional PVSS scheme. Additionally, AB-SS (or AB-PVSS) distinguishes itself from traditional SS (or PVSS) by enabling
a dealer to generate shares according to an arbitrary monotone access structure. To construct an AB-PVSS scheme, we first implement a decentralized ciphertext-policy attribute-based encryption (CP-ABE) scheme. The proposed CP-ABE scheme offers a smaller
ciphertext size and requires fewer computational operations, although it is not fully-fledged as a trade-off. We then incorporate non-interactive zero-knowledge (NIZK) proofs to enable public verification of the CP-ABE ciphertext. Based on the CP-ABE and
NIZK proofs, we construct an AB-PVSS primitive. Furthermore, we present an intuitive implementation of optimistic fair exchange based on the AB-PVSS scheme. Finally, we conduct security analysis and comprehensive experiments on the proposed CP-ABE and AB-PVSS schemes. The results demonstrate that both schemes exhibit plausible performance compared to related works.
## 2025/663
* Title: Intermundium-DL: Assessing the Resilience of Current Schemes to Discrete-Log-Computation Attacks on Public Parameters
* Authors: Mihir Bellare, Doreen Riepel, Laura Shea
* [Permalink](https://eprint.iacr.org/2025/663)
* [Download](https://eprint.iacr.org/2025/663.pdf)
### Abstract
We consider adversaries able to perform a nonzero but small number of discrete logarithm computations, as would be expected with near-term quantum computers. Schemes with public parameters consisting of a few group elements are now at risk; could an
adversary knowing the discrete logarithms of these elements go on to easily compromise the security of many users? We study this question for known schemes and find, across them, a perhaps surprising variance in the answers. In a first class are schemes,
including Okamoto and Katz-Wang signatures, that we show fully retain security even when the discrete logs of the group elements in their parameters are known to the adversary. In a second class are schemes like Cramer-Shoup encryption and the SPAKE2
password-authenticated key exchange protocol that we show retain some partial but still meaningful and valuable security. In the last class are schemes that we show, by attack, to be totally broken. The distinctions uncovered by these results shed light on the security of classical schemes in a setting of immediate importance, and help guide choices moving forward.
## 2025/664
* Title: Publicly Verifiable Generalized Secret Sharing Schemes and Their Applications
* Authors: Liang Zhang, Dongliang Cai, Tao Liu, Haibin Kan, Jiheng Zhang
* [Permalink](https://eprint.iacr.org/2025/664)
* [Download](https://eprint.iacr.org/2025/664.pdf)
### Abstract
Generalized secret sharing (GSS), which accommodates monotone access structures, has been under-explored in distributed computing over the past decades. In this paper, we propose the publicly verifiable generalized secret sharing (PVGSS) scheme,
enhancing the applicability of GSS in transparent systems. PVGSS not only enables a dealer to share a secret with fine-grained access structures, but also allows anyone to verify whether the dealer and shareholders are acting honestly or not. We begin by
introducing two approaches to implement GSS schemes: one based on recursive Shamir secret sharing and another utilizing a linear secret sharing scheme (LSSS). Then, we present PVGSS constructions by integrating non-interactive zero-knowledge proofs into the GSS schemes. Further, we prove that the proposed PVGSS schemes achieve IND1-secrecy under the DDH assumption. To showcase the practical applicability of PVGSS schemes, we implement a decentralized exchange (DEX) protocol that enables fair atomic swaps of
ERC-20 tokens. A sophisticated access structure is devised to: (1) enable fair atomic swaps during normal protocol execution, and (2) incorporate fault-tolerant passive watchers to provide accountable arbitration when disputes occur. Our benchmarks on
the BN128 curve demonstrate the computational efficiency of PVGSS schemes, while Ethereum gas cost analysis confirms the viability of the DEX implementation.
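To make "generalized" (monotone) access structures concrete, here is a textbook recursive sharing sketch in which AND gates split the secret additively and OR gates duplicate it; threshold gates would use Shamir sharing recursively, as in the paper's first approach. The PVGSS constructions themselves, the LSSS variant, and the NIZK-based public verifiability are not reproduced here.

```python
# Textbook recursive secret sharing for a monotone formula (AND: additive
# split, OR: duplicate, leaf: shareholder). Illustrative only; not the
# paper's recursive-Shamir or LSSS construction.
import secrets

P = 2**127 - 1   # prime field for the shares

def share(secret: int, policy, out=None):
    """policy is a leaf name, ("and", p1, p2, ...) or ("or", p1, p2, ...)."""
    out = {} if out is None else out
    if isinstance(policy, str):                      # leaf: one shareholder
        out.setdefault(policy, []).append(secret)
    elif policy[0] == "or":                          # any child suffices
        for child in policy[1:]:
            share(secret, child, out)
    elif policy[0] == "and":                         # all children needed
        parts = [secrets.randbelow(P) for _ in policy[2:]]
        parts.append((secret - sum(parts)) % P)
        for child, part in zip(policy[1:], parts):
            share(part, child, out)
    return out

policy = ("or", ("and", "alice", "bob"), "carol")    # (alice AND bob) OR carol
shares = share(1234567, policy)
print(shares)
print((shares["alice"][0] + shares["bob"][0]) % P, shares["carol"][0])  # both recover the secret
```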
## 2025/665
* Title: MProve-Nova: A Privacy-Preserving Proof of Reserves Protocol for Monero
* Authors: Varun Thakore, Saravanan Vijayakumaran
* [Permalink](https://eprint.iacr.org/2025/665)
* [Download](https://eprint.iacr.org/2025/665.pdf)
### Abstract
A proof of reserves (PoR) protocol enables a cryptocurrency exchange to prove to its users that it owns a certain amount of coins, as a first step towards proving that it is solvent. We present the design, implementation, and security analysis of MProve-Nova, a PoR protocol for Monero that leverages the Nova recursive SNARK to achieve two firsts (without requiring any trusted setup). It is the first Monero PoR protocol that reveals only the number of outputs owned by an exchange; no other information about the outputs or their key images is revealed. It is also the first Monero PoR protocol where the proof size and proof verification time are constant, i.e., they are independent of the number of outputs on the Monero blockchain and the number of outputs owned by the exchange. To achieve constant verification times, MProve-Nova requires a pre-processing step which creates two Merkle trees from all the outputs and key images on the Monero blockchain.
MProve-Nova consists of two Nova-based subprotocols, a reserves commitment generator (RCG) protocol used to compute a commitment to the total reserves owned by an exchange and a non-collusion (NC) protocol used to prove non-collusion between two
exchanges. For the RCG protocol, we observed proof sizes of about 28 KB and verification times of 4.3 seconds. For the NC protocol, we observed proof sizes of about 24 KB and verification times of 0.2 seconds. Proving times for both protocols increase
linearly with the number of outputs owned by the exchange but remain independent of the number of outputs on the Monero blockchain. On average, the RCG protocol required about 42 minutes per 1000 outputs and the NC protocol required about 5 minutes per
1000 outputs.
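The pre-processing step described above builds Merkle trees over all blockchain outputs and key images. A minimal Merkle-root computation of that flavour is sketched below; the hash and padding choices are illustrative assumptions, and the Nova proof circuits themselves are not shown.

```python
# Minimal Merkle-root builder illustrating the pre-processing idea.
# Hash and odd-level padding choices are illustrative, not the paper's.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

outputs = [f"output-{i}".encode() for i in range(5)]
print(merkle_root(outputs).hex())
```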
## 2025/666
* Title: Adaptive Robustness of Hypergrid Johnson-Lindenstrauss
* Authors: Andrej Bogdanov, Alon Rosen, Neekon Vafa, Vinod Vaikuntanathan
* [Permalink](https://eprint.iacr.org/2025/666)
* [Download](https://eprint.iacr.org/2025/666.pdf)
### Abstract
Johnson and Lindenstrauss (Contemporary Mathematics, 1984) showed that for $n > m$, a scaled random projection $\mathbf{A}$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is an approximate isometry on any set $S$ of size at most exponential in $m$. If $S$ is
larger, however, its points can contract arbitrarily under $\mathbf{A}$. In particular, the hypergrid $([-B, B] \cap \mathbb{Z})^n$ is expected to contain a point that is contracted by a factor of $\kappa_{\mathsf{stat}} = \Theta(B)^{-1/\alpha}$, where $\alpha = m/n$.
We give evidence that finding such a point exhibits a statistical-computational gap precisely up to $\kappa_{\mathsf{comp}} = \widetilde{\Theta}(\sqrt{\alpha}/B)$. On the algorithmic side, we design an online algorithm achieving $\kappa_{\mathsf{comp}}$,
inspired by a discrepancy minimization algorithm of Bansal and Spencer (Random Structures \& Algorithms, 2020). On the hardness side, we show evidence via a multiple overlap gap property (mOGP), which in particular captures online algorithms; and a
reduction-based lower bound, which shows hardness under standard worst-case lattice assumptions.
As a cryptographic application, we show that the rounded Johnson-Lindenstrauss embedding is a robust property-preserving hash function (Boyle, Lavigne and Vaikuntanathan, TCC 2019) on the hypergrid for the Euclidean metric in the computationally hard
regime. Such hash functions compress data while preserving $\ell_2$ distances between inputs up to some distortion factor, with the guarantee that even knowing the hash function, no computationally bounded adversary can find any pair of points that
violates the distortion bound.
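The scaled projection in the first paragraph is easy to poke at numerically: with i.i.d. $\mathcal{N}(0, 1/m)$ entries, typical points keep their norm up to small distortion, while the statistical bound above says a set as large as the full hypergrid should contain badly contracted points (the snippet below makes no attempt to find one).

```python
# Quick numerical look at the scaled Johnson-Lindenstrauss projection:
# random hypergrid points have norms roughly preserved under A.
import numpy as np

rng = np.random.default_rng(0)
n, m, B = 200, 50, 3
A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))     # scaled random projection

for _ in range(3):
    x = rng.integers(-B, B + 1, size=n)                  # a random hypergrid point
    if np.linalg.norm(x) > 0:
        print(np.linalg.norm(A @ x) / np.linalg.norm(x))  # close to 1 with high probability
```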
## 2025/667
* Title: Vector Commitment Design, Analysis, and Applications: A Survey
* Authors: Vir Pathak, Sushmita Ruj, Ron van der Meyden
* [Permalink](https://eprint.iacr.org/2025/667)
* [Download](https://eprint.iacr.org/2025/667.pdf)
### Abstract
Due to their widespread applications in decentralized and privacy preserving technologies, commitment schemes have become increasingly important cryptographic primitives. With a wide variety of applications, many new constructions have been proposed,
each enjoying different features and security guarantees. In this paper, we systematize the designs, features, properties, and applications of vector commitments (VCs). We define vector, polynomial, and functional commitments and we discuss the
relationships shared between these types of commitment schemes. We first provide an overview of the definitions of the commitment schemes we will consider, as well as their security notions and various properties they can have. We proceed to compare
popular constructions, taking into account the properties each one enjoys, their proof/update information sizes, and their proof/commitment complexities. We also consider their effectiveness in various decentralized and privacy preserving applications.
Finally, we conclude by discussing some potential directions for future work.
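As one concrete data point for the polynomial-commitment family surveyed here, the standard KZG scheme over a pairing group with a trusted setup $(g, g^{\tau}, g^{\tau^2}, \dots)$ is summarized below; a vector commitment follows by committing to a polynomial that interpolates the vector. This is textbook background, not a construction introduced by the survey.

```latex
% KZG polynomial commitment (textbook background, not the survey's own scheme).
\begin{aligned}
  \mathsf{Commit}(\phi) &: \; C = g^{\phi(\tau)},\\
  \mathsf{Open}(\phi, z) &: \; y = \phi(z), \quad w = g^{q(\tau)}
     \;\text{ where }\; q(X) = \frac{\phi(X) - y}{X - z},\\
  \mathsf{Verify}(C, z, y, w) &: \; e\!\left(C \cdot g^{-y},\, g\right)
     \stackrel{?}{=} e\!\left(w,\, g^{\tau - z}\right).
\end{aligned}
```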