## In this issue
1. [2024/762] Constant-Cost Batched Partial Decryption in ...
2. [2024/766] Breaking Verifiable Delay Functions in the Random ...
3. [2024/1530] Folding Schemes with Privacy Preserving Selective ...
4. [2024/1531] FLI: Folding Lookup Instances
5. [2024/1532] Bitwise Garbling Schemes --- A Model with ...
6. [2024/1533] BEAT-MEV: Epochless Approach to Batched Threshold ...
7. [2024/1534] More Efficient Lattice-based OLE from Circuit- ...
8. [2024/1535] Relaxed Lattice-Based Programmable Hash Functions: ...
9. [2024/1536] Cryptographic Characterization of Quantum Advantage
10. [2024/1537] VOLE-in-the-head signatures from Subfield Bilinear ...
11. [2024/1538] Security Perceptions of Users in Stablecoins: ...
12. [2024/1539] Quantum Cryptography from Meta-Complexity
13. [2024/1540] Formal Security Analysis of the OpenID FAPI 2.0 ...
14. [2024/1541] Findex: A Concurrent and Database-Independent ...
15. [2024/1542] Robust AE With Committing Security
16. [2024/1543] HEonGPU: a GPU-based Fully Homomorphic Encryption ...
17. [2024/1544] PoUDR: Proof of Unified Data Retrieval in ...
18. [2024/1545] Fully Composable Homomorphic Encryption
19. [2024/1546] Bit t-SNI Secure Multiplication Gadget for Inner ...
20. [2024/1547] HHL for tensor-decomposable matrices
21. [2024/1548] Fully-Succinct Arguments over the Integers from ...
22. [2024/1549] Universally Composable SNARKs with Transparent ...
23. [2024/1550] MAYO Key Recovery by Fixing Vinegar Seeds
24. [2024/1551] SNARKs for Virtual Machines are Non-Malleable
25. [2024/1552] Revisiting Keyed-Verification Anonymous Credentials
26. [2024/1553] STARK-based Signatures from the RPO Permutation
27. [2024/1554] Breaking, Repairing and Enhancing XCBv2 into the ...
28. [2024/1555] Private Laconic Oblivious Transfer with Preprocessing
29. [2024/1556] The module action for isogeny based cryptography
30. [2024/1557] Tightly Secure Threshold Signatures over Pairing- ...
31. [2024/1558] Understanding Leakage in Searchable Encryption: a ...
32. [2024/1559] Mind the Composition of Toffoli Gates: Structural ...
33. [2024/1560] Revisiting Shuffle-Based Private Set Unions with ...
34. [2024/1561] FLUENT: A Tool for Efficient Mixed-Protocol Semi- ...
35. [2024/1562] Fully Privacy-preserving Billing Models for Peer- ...
36. [2024/1563] Optimized One-Dimensional SQIsign Verification on ...
37. [2024/1564] A Simple Framework for Secure Key Leasing
38. [2024/1565] Fiat-Shamir in the Wild
39. [2024/1566] Dynamic zk-SNARKs
40. [2024/1567] A New World in the Depths of Microcrypt: Separating ...
41. [2024/1568] Oracle Separation Between Quantum Commitments and ...
42. [2024/1569] The Supersingular Isogeny Path and Endomorphism ...
## 2024/762
* Title: Constant-Cost Batched Partial Decryption in Threshold Encryption
* Authors: Sora Suegami, Shinsaku Ashizawa, Kyohei Shibano
* [Permalink](https://eprint.iacr.org/2024/762)
* [Download](https://eprint.iacr.org/2024/762.pdf)
### Abstract
Threshold public key encryption schemes distribute secret keys among multiple parties, known as the committee, to reduce reliance on a single trusted entity.
However, existing schemes face inefficiencies, as the committee must perform computation and communication to decrypt each individual ciphertext.
As the number of ciphertexts being decrypted per unit of time increases, this can limit the number of committee parties and their decentralization due to increased hardware requirements, heightening the risk of adversarial collusion.
To address this, we introduce tag-based batched threshold encryption (TBTE), which ensures constant computational and communication costs per committee member, independent of the number of ciphertexts being decrypted in batch under distinct decryption policies.
The TBTE scheme is constructed over bilinear groups in the random oracle model and secure in the algebraic group model, assuming the hardness of the $(q_1,q_2)$-discrete logarithm problem and the EAV-security of the symmetric-key encryption scheme.
Evaluation of our implementation demonstrates constant data size, specifically 48 bytes received and 56 bytes sent, and constant execution time for each committee party during decryption, even for various batch sizes up to $2^{20}$.
## 2024/766
* Title: Breaking Verifiable Delay Functions in the Random Oracle Model
* Authors: Ziyi Guan, Artur Riazanov, Weiqiang Yuan
* [Permalink](https://eprint.iacr.org/2024/766)
* [Download](https://eprint.iacr.org/2024/766.pdf)
### Abstract
A verifiable delay function (VDF) is a cryptographic primitive that requires a long time to compute (even with parallelization), but produces a unique output that is efficiently and publicly verifiable.
We prove that VDFs with \emph{imperfect completeness} and \emph{computational uniqueness} do not exist in the random oracle model. This also rules out black-box constructions of VDFs from other cryptographic primitives, such as one-way permutations and
collision-resistant hash functions.
Prior to our work, Mahmoody, Smith, and Wu (ICALP 2020) proved that VDFs satisfying both \emph{perfect completeness} and \emph{perfect uniqueness} do not exist in the random oracle model; on the other hand, Ephraim, Freitag, Komargodski, and Pass (Eurocrypt 2020) construct VDFs with \emph{perfect completeness} and \emph{computational uniqueness} in the random oracle model assuming the hardness of repeated squaring. Our result is optimal -- it bridges the gap between previously known impossibility results and existing constructions.
## 2024/1530
* Title: Folding Schemes with Privacy Preserving Selective Verification
* Authors: Joan Boyar, Simon Erfurth
* [Permalink](https://eprint.iacr.org/2024/1530)
* [Download](https://eprint.iacr.org/2024/1530.pdf)
### Abstract
Folding schemes are an exciting new primitive, transforming the task of performing multiple zero-knowledge proofs of knowledge for a relation into performing just one zero-knowledge proof, for the same relation, and a number of cheap inclusion-proofs.
Recently, folding schemes have been used to amortize the cost associated with proving different statements to multiple distinct verifiers, which has various applications. We observe that for these uses, leaking information about the statements folded
together can be problematic, yet this happens with previous constructions. Towards resolving this issue, we give a natural definition of privacy preserving folding schemes, and what security they should offer. To construct privacy preserving folding
schemes, we first define statement hiders, a primitive which might be of independent interest. In a nutshell, a statement hider hides an instance of a relation as a new instance in the same relation. The new instance is in the relation if and only if
the initial instance is. With this building block, we can utilize existing folding schemes to construct a privacy preserving folding scheme, by first hiding each of the statements. Folding schemes allow verifying that a statement was folded into another
statement, while statement hiders allow verifying that a statement was hidden as another statement.
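To make the idea of a statement hider concrete, here is a toy sketch of our own (not the paper's construction), using the discrete-log relation $R = \{(y, x) : y = g^x \bmod p\}$ as an illustrative example: re-randomizing the instance preserves membership in the relation in both directions.

```python
import secrets

# Toy public parameters: p is the Mersenne prime 2^127 - 1, g a fixed base.
# These are illustrative choices, not parameters from the paper.
p, g = 2**127 - 1, 3

def hide(y):
    """Hide the instance y of R = {(y, x) : y = g^x mod p} as a fresh instance."""
    r = secrets.randbelow(p - 1)       # fresh blinding exponent
    return (y * pow(g, r, p)) % p, r   # y' = y * g^r; y' is in R iff y is

x = 123456789        # witness
y = pow(g, x, p)     # instance: (y, x) is in R
y_hidden, r = hide(y)
# A witness x for y converts into a witness x + r for y_hidden, and vice versa.
assert y_hidden == pow(g, (x + r) % (p - 1), p)
```

The equivalence of witnesses in both directions is exactly the "if and only if" property required of a statement hider.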
## 2024/1531
* Title: FLI: Folding Lookup Instances
* Authors: Albert Garreta, Ignacio Manzur
* [Permalink](https://eprint.iacr.org/2024/1531)
* [Download](https://eprint.iacr.org/2024/1531.pdf)
### Abstract
We introduce two folding schemes for lookup instances: FLI and FLI+SOS. Both use a PIOP to check that a matrix has elementary basis vectors as rows, with FLI+SOS adding a twist based on Lasso’s SOS-decomposability.
FLI takes two lookup instances $\{\mathbf{a}_1\}, \{\mathbf{a}_2\}\subseteq\mathbf{t}$, and expresses them as matrix equations $M_i\cdot\mathbf{t}^\mathsf{T}=\mathbf{a}_i^\mathsf{T}$ for $i=1,2$, where each matrix $M_i\in\mathbb{F}^{m\times N}$ has rows which are elementary basis vectors in $\mathbb{F}^N$. Matrices that satisfy this condition are said to be in $R_{\mathsf{elem}}$. Then, a folding scheme for $R_{\mathsf{elem}}$ into a relaxed relation is used, which combines the matrices $M_1, M_2$ as $M_1+\alpha M_2$ for a random $\alpha\in\mathbb{F}$. Finally, the lookup equations are combined as $(M_1+\alpha M_2)\cdot \mathbf{t}^{\mathsf{T}} = (\mathbf{a}_1+\alpha \mathbf{a}_2)^\mathsf{T}$. In FLI, only the property that a matrix is in $R_{\mathsf{elem}}$ is folded, and this makes the FLI folding step the cheapest among existing solutions. The price to pay is in the cost for proving accumulated instances.
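As a toy numerical check (our illustration, not the FLI protocol itself), the folding equation from the abstract can be verified over a small prime field: each lookup is a matrix with elementary-basis-vector rows applied to the table, and the two equations combine linearly under a random challenge.

```python
P = 97  # small prime, standing in for the field F

def lookup_matrix(indices, N):
    """Row j is the elementary basis vector e_{indices[j]} in F^N."""
    return [[1 if col == i else 0 for col in range(N)] for i in indices]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % P for row in M]

t = [3, 1, 4, 1, 5, 9, 2, 6]        # the public lookup table t
idx1, idx2 = [2, 0, 5], [7, 7, 1]   # table positions used by each instance
M1, M2 = lookup_matrix(idx1, len(t)), lookup_matrix(idx2, len(t))
a1, a2 = mat_vec(M1, t), mat_vec(M2, t)   # the looked-up vectors a_1, a_2

alpha = 42  # the random challenge alpha in F
M_fold = [[(x + alpha * y) % P for x, y in zip(r1, r2)] for r1, r2 in zip(M1, M2)]
a_fold = [(x + alpha * y) % P for x, y in zip(a1, a2)]
assert mat_vec(M_fold, t) == a_fold  # (M1 + alpha*M2)·t^T = (a1 + alpha*a2)^T
```

Note that the folded matrix generally leaves $R_{\mathsf{elem}}$ (its rows are no longer elementary basis vectors), which is why the folding targets a relaxed relation.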
FLI+SOS builds upon FLI to enable folding of large SOS-decomposable tables. This is achieved through a variation of Lasso's approach to SOS-decomposability, which fits FLI naturally. For comparison, we describe (for the first time to our knowledge)
straightforward variations of Protostar and Proofs for Deep Thought that also benefit from SOS-decomposability. We see that for many reasonable parameter choices, and especially those arising from lookup-based zkVMs, FLI+SOS can concretely be the
cheapest folding solution.
## 2024/1532
* Title: Bitwise Garbling Schemes --- A Model with $\frac{3}{2}\kappa$-bit Lower Bound of Ciphertexts
* Authors: Fei Xu, Honggang Hu, Changhong Xu
* [Permalink](https://eprint.iacr.org/2024/1532)
* [Download](https://eprint.iacr.org/2024/1532.pdf)
### Abstract
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes, in which they proved a $2\kappa$-bit lower bound on ciphertexts for a broad class of garbling schemes. Since then, several methods have been developed that bypass this lower bound, albeit with a notable limitation: their reliance on specifically correlated input wire labels restricts their applicability across most gates. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme in
which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a boolean circuit. Following this revelation, Rosulek
and Roy presented several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?''
In this paper, we propose Bitwise Garbling Schemes, a model that seamlessly incorporates the slicing-and-dicing technique. Our key revelation is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Since Rosulek and
Roy also suggested another problem which questions the necessity of free-XOR, we explore constructions without free-XOR and prove a $2\kappa$-bit lower bound. Therefore, sacrificing compatibility with free-XOR does not lead to a more efficient scheme.
## 2024/1533
* Title: BEAT-MEV: Epochless Approach to Batched Threshold Encryption for MEV Prevention
* Authors: Jan Bormet, Sebastian Faust, Hussien Othman, Ziyan Qu
* [Permalink](https://eprint.iacr.org/2024/1533)
* [Download](https://eprint.iacr.org/2024/1533.pdf)
### Abstract
In decentralized finance (DeFi), the public availability of pending transactions presents significant privacy concerns, enabling market manipulation through miner extractable value (MEV). MEV occurs when block proposers exploit the ability to reorder,
omit, or include transactions, causing financial loss to users from frontrunning. Recent research has focused on encrypting pending transactions, hiding transaction data until block finalization. To this end, Choudhuri et al. (USENIX '24) introduce an
elegant new primitive called Batched Threshold Encryption (BTE) where a batch of encrypted transactions is selected by a committee and only decrypted after block finalization. Crucially, BTE achieves low communication complexity during decryption and
guarantees that all encrypted transactions outside the batch remain private. An important shortcoming of their construction is, however, that it progresses in epochs and requires a costly setup in MPC for each batch decryption.
In this work, we introduce a novel BTE scheme that addresses these limitations by eliminating the need for an expensive epoch setup while achieving practical encryption and decryption times. Additionally, we explore the problem of how users can coordinate
their transactions, which is crucial for the functionality of the system. Along the way, we present several optimizations and trade-offs between communication and computational complexity that allow us to achieve practical performance on standard
hardware ($<2$ ms for encryption and $<440$ ms for decrypting $512$ transactions). Finally, we prove our constructions secure in a model that captures practical attacks on MEV-prevention mechanisms.
## 2024/1534
* Title: More Efficient Lattice-based OLE from Circuit-private Linear HE with Polynomial Overhead
* Authors: Leo de Castro, Duhyeong Kim, Miran Kim, Keewoo Lee, Seonhong Min, Yongsoo Song
* [Permalink](https://eprint.iacr.org/2024/1534)
* [Download](https://eprint.iacr.org/2024/1534.pdf)
### Abstract
We present a new and efficient method to obtain circuit privacy for lattice-based linearly homomorphic encryption (LHE). In particular, our method does not involve noise flooding with exponentially large errors or iterative bootstrapping. As a direct
result, we obtain a semi-honest oblivious linear evaluation (OLE) protocol with the same efficiency, reducing the communication cost of the prior state of the art by 50%.
Consequently, the amortized time of our protocol improves upon the prior work by 33% in a 100 Mbps network setting. Our semi-honest OLE is the first to achieve both concrete efficiency and asymptotic quasi-optimality. Together with an extension of the recent
zero-knowledge proof of plaintext knowledge, our LHE yields actively-secure OLE with 2.7x less communication than the prior work. When applied to Overdrive (Eurocrypt '18), an MPC preprocessing protocol, our method provides a 1.4x improvement in
communication over the state of the art.
## 2024/1535
* Title: Relaxed Lattice-Based Programmable Hash Functions: New Efficient Adaptively Secure IBEs
* Authors: Xingye Lu, Jingjing Fan, Man Ho AU
* [Permalink](https://eprint.iacr.org/2024/1535)
* [Download](https://eprint.iacr.org/2024/1535.pdf)
### Abstract
In this paper, we introduce the notion of relaxed lattice-based programmable hash function (RPHF), which is a novel variant of lattice-based programmable hash functions (PHFs). Lattice-based PHFs, together with preimage trapdoor functions (TDFs), have
been widely utilized (implicitly or explicitly) in the construction of adaptively secure identity-based encryption (IBE) schemes. The preimage length and the output length of the underlying PHF and TDF together determine the user secret key and
ciphertext lengths of the IBE schemes.
However, the current lattice-based PHF definition imposes the restriction that the preimage length of TDF in the IBE schemes cannot be too short, hindering the utilization of size-efficient NTRU TDF. To overcome this hurdle, RPHF relaxes the hash key
distribution requirement in the definition of PHF from statistical indistinguishability to computational indistinguishability. This relaxation eliminates limitations on the preimage length of underlying TDFs in IBE, enabling the construction of IBEs from
NTRU TDFs.
We introduce two instantiations of RPHF: the first produces a hash output length of $2$ ring elements, with a hash key size linear in the input length, and the second yields an output length of $14$ ring elements, with a hash key size proportional to the square root of the input length.
Building upon these RPHF instantiations, we propose two adaptively secure lattice-based IBE schemes with ciphertext lengths of $5$ and $17$ ring elements and user secret key lengths of $4$ and $16$ ring elements, respectively.
The length of the IBE master public key is roughly equivalent to the size of the hash key of the underlying RPHF.
In comparison to existing IBE constructions, our proposed schemes achieve a significant reduction (over an order of magnitude) in ciphertext and secret key sizes. Notably, state-of-the-art constructions from ideal lattices exhibit secret key and
ciphertext sizes over 100 ring elements, making our proposed schemes highly efficient. Moreover, the master public key sizes of our IBEs remain comparable.
## 2024/1536
* Title: Cryptographic Characterization of Quantum Advantage
* Authors: Tomoyuki Morimae, Yuki Shirakawa, Takashi Yamakawa
* [Permalink](https://eprint.iacr.org/2024/1536)
* [Download](https://eprint.iacr.org/2024/1536.pdf)
### Abstract
Quantum computational advantage refers to the existence of computational tasks that are easy for quantum computers but hard for classical ones. Unconditionally proving quantum advantage is beyond our current understanding of complexity theory, and
therefore some computational assumptions are needed. Which complexity assumption is necessary and sufficient for quantum advantage? In this paper, we show that inefficient-verifier proofs of quantumness (IV-PoQ) exist if and only if classically-secure
one-way puzzles (OWPuzzs) exist. As far as we know, this is the first time that a complete cryptographic characterization of quantum advantage is obtained. IV-PoQ capture various types of quantum advantage previously studied, such as sampling advantage
and searching advantage. Previous work [Morimae and Yamakawa 2024] showed that IV-PoQ can be constructed from OWFs, but a construction of IV-PoQ from weaker assumptions was left open. Our result solves the open problem. OWPuzzs are one of the most
fundamental quantum cryptographic primitives implied by many quantum cryptographic primitives weaker than one-way functions (OWFs). The equivalence between IV-PoQ and classically-secure OWPuzzs therefore highlights that if there is no quantum advantage,
then these fundamental primitives do not exist. The equivalence also means that quantum advantage is an example of the applications of OWPuzzs. Except for commitments, no application of OWPuzzs was known before. Our result shows that quantum advantage is
another application of OWPuzzs. Moreover, it is the first quantum computation classical communication (QCCC) application of OWPuzzs.
## 2024/1537
* Title: VOLE-in-the-head signatures from Subfield Bilinear Collisions
* Authors: Janik Huth, Antoine Joux
* [Permalink](https://eprint.iacr.org/2024/1537)
* [Download](https://eprint.iacr.org/2024/1537.pdf)
### Abstract
In this paper, we introduce a new method to construct a signature scheme based on the subfield bilinear collision problem published at Crypto 2024. We use techniques based on vector oblivious linear evaluation (VOLE) to significantly improve the running
time and signature size of the scheme compared to the MPC-in-the-head version.
## 2024/1538
* Title: Security Perceptions of Users in Stablecoins: Advantages and Risks within the Cryptocurrency Ecosystem
* Authors: Maggie Yongqi Guan, Yaman Yu, Tanusree Sharma, Molly Zhuangtong Huang, Kaihua Qin, Yang Wang, Kanye Ye Wang
* [Permalink](https://eprint.iacr.org/2024/1538)
* [Download](https://eprint.iacr.org/2024/1538.pdf)
### Abstract
Stablecoins, a type of cryptocurrency pegged to another asset to maintain a stable price, have become an important part of the cryptocurrency ecosystem. Prior studies have primarily focused on examining the security of stablecoins from technical and
theoretical perspectives, with limited investigation into users’ risk perceptions and security behaviors in stablecoin practices. To address this research gap, we conducted a mixed-method study that included constructing a stablecoin interaction
framework based on the literature, which informed the design of our interview protocol, semi-structured interviews (n=21), and Reddit data analysis (9,326 posts). We found that participants see stable value and regulatory compliance as key security
advantages of stablecoins over other cryptocurrencies. However, participants also raised concerns about centralization risks in fiat-backed stablecoins, perceived challenges in crypto-backed stablecoins due to limited reliance on fully automated
execution, and confusion regarding the complex mechanisms of algorithmic stablecoins. We proposed improving user education and optimizing mechanisms to address these concerns and promote the safer use of stablecoins.
## 2024/1539
* Title: Quantum Cryptography from Meta-Complexity
* Authors: Taiga Hiroka, Tomoyuki Morimae
* [Permalink](https://eprint.iacr.org/2024/1539)
* [Download](https://eprint.iacr.org/2024/1539.pdf)
### Abstract
In classical cryptography, one-way functions (OWFs) are the minimal assumption, while recent active studies have demonstrated that OWFs are not necessarily the minimum assumption in quantum cryptography. Several new primitives have been introduced such
as pseudorandom unitaries (PRUs), pseudorandom function-like state generators (PRFSGs), pseudorandom state generators (PRSGs), one-way state generators (OWSGs), one-way puzzles (OWPuzzs), and EFI pairs. They are believed to be weaker than OWFs, but they
still imply many useful applications such as private-key quantum money schemes, secret-key encryption, message authentication codes, digital signatures, commitments, and multiparty computations. Now that the possibility of quantum cryptography without
OWFs has opened up, the most important goal in the field is to provide them with concrete instantiations. For example, in classical cryptography, there are many instantiations of OWFs based on concrete hardness assumptions, such as the hardness of
discrete logarithms or learning with errors. The study of generic primitives is justified by the existence of concrete instantiations. On the other hand, in quantum cryptography, all known constructions of those primitives are only from OWFs. We
therefore have the following important open problem: Do they have instantiations based on some concrete hardness assumptions that will not imply OWFs? Ideally, the assumptions should be the ones that are studied in other contexts than cryptography. In
this paper, we give a candidate answer to this question by showing that the quantum-average-hardness of the GapK problem implies the existence of OWPuzzs. The GapK problem is a promise problem to decide whether a given bit string has small Kolmogorov complexity
or not. Its quantum-average-hardness means that an instance is sampled from a quantum-polynomial-time-samplable distribution, and no quantum-polynomial-time algorithm can solve the problem with high probability. As far as we know, this is the first time
that a ``Microcrypt'' primitive is constructed based on concrete hardness assumptions that do not seem to imply OWFs. Moreover, the assumptions are studied in other contexts than cryptography, especially in the field of meta-complexity. (Note: During
the preparation of this manuscript, Khurana and Tomer \cite{cryptoeprint:2024/1490} uploaded a concurrent work.)
## 2024/1540
* Title: Formal Security Analysis of the OpenID FAPI 2.0 Family of Protocols: Accompanying a Standardization Process
* Authors: Pedram Hosseyni, Ralf Küsters, Tim Würtele
* [Permalink](https://eprint.iacr.org/2024/1540)
* [Download](https://eprint.iacr.org/2024/1540.pdf)
### Abstract
FAPI 2.0 is a suite of Web protocols developed by the OpenID Foundation's FAPI Working Group (FAPI WG) for third-party data sharing and digital identity in high-risk environments. Even though the specifications are not completely finished, several
important entities have started to adopt the FAPI 2.0 protocols, including Norway's national HelseID, Australia's Consumer Data Standards, as well as private companies like Authlete and Australia-based connectID; the predecessor FAPI 1.0 is in widespread
use with millions of users.
The FAPI WG asked us to accompany the standardization of the FAPI 2.0 protocols with a formal security analysis to proactively identify vulnerabilities before widespread deployment and to provide formal security guarantees for the standards. In this
paper, we report on our analysis and findings.
Our analysis is based on a detailed model of the Web infrastructure, the so-called Web Infrastructure Model (WIM), which we extend to be able to carry out our analysis of the FAPI 2.0 protocols, including important extensions like FAPI-CIBA. Based on the (extended) WIM and formalizations of the security goals and attacker model laid out in the FAPI 2.0 specifications, we provide a formal model of the protocols and carry out a formal security analysis, revealing several attacks. We have worked with the
FAPI WG to fix the protocols, resulting in several amendments to the specifications. With these changes in place, we have adjusted our protocol model and formally proved that the security properties hold true under the strong attacker model defined by
the FAPI WG.
## 2024/1541
* Title: Findex: A Concurrent and Database-Independent Searchable Encryption Scheme
* Authors: Théophile Brézot, Chloé Hébant
* [Permalink](https://eprint.iacr.org/2024/1541)
* [Download](https://eprint.iacr.org/2024/1541.pdf)
### Abstract
State-of-the-art database implementations offer a wide range of functionalities and impressive performance while supporting highly concurrent loads. However, they all rely on the server knowing the content of the database, which raises issues when sensitive information is being stored on a server that cannot be trusted. Encrypting documents before sending them to a remote server solves the confidentiality issue at the cost of losing the keyword search functionality. Cryptographic primitives such as Symmetric Searchable Encryption (SSE) schemes have been proposed to recover this functionality. However, no SSE construction properly defines correctness and successfully guarantees security in a concurrent setting.
This paper takes a first step in this direction by recommending linearizability as the standard notion of correctness for concurrent SSE. We study the impact of concurrency on security and stress the need for finer-grained security models. Hence, we
propose the log-security model that guarantees security against an adversary having access to persistency-related logs, fixing a blind spot in the snapshot model while capturing security in a concurrent setting.
We also build the first concurrent SSE solution proven correct and secure in a concurrent setting, which can be implemented on top of any database. Our scheme proves fast thanks to optimal wait-free search operations and sequentially optimal, lock-free modifications, both of which execute in under one microsecond per binding, while incurring only a 13.3% storage expansion.
## 2024/1542
* Title: Robust AE With Committing Security
* Authors: Viet Tung Hoang, Sanketh Menda
* [Permalink](https://eprint.iacr.org/2024/1542)
* [Download](https://eprint.iacr.org/2024/1542.pdf)
### Abstract
There has been recent interest in developing and standardizing Robust Authenticated Encryption (Robust AE) schemes. NIST, for example, is considering an Accordion mode (a wideblock tweakable blockcipher), with Robust AE as a primary application. On the other hand, recent attacks and applications suggest that encryption needs to be committing. Indeed, committing security is also a design consideration in the Accordion mode. Yet it is unclear how to build a Robust AE scheme with committing security.
In this work, we give a modular solution to this problem. We first show how to transform any wideblock tweakable blockcipher TE into a Robust AE scheme SE that commits just the key. The overhead is cheap: just a few finite-field multiplications and blockcipher calls. If one wants to commit the entire encryption context, one can simply hash the context to derive a 256-bit subkey and use SE with that subkey. Using a 256-bit key for SE only means that it has to rely on AES-256; it does not require TE to have a 256-bit key.
Our approach frees the Accordion designs from consideration of committing security. Moreover, it yields significant savings for several key-committing applications that do not want to pay the inherent hashing cost of fully committing.
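The context-hashing step described above can be sketched as follows. This is our illustration, not the paper's construction: `se_encrypt` stands in for the key-committing scheme SE, the dummy stand-in and the `|`-joined context encoding are ours, and a real instantiation would use an injective context encoding.

```python
import hashlib

def committing_encrypt(key, nonce, associated_data, msg, se_encrypt):
    """Commit to the full encryption context by hashing it into SE's subkey."""
    context = b"|".join([key, nonce, associated_data])  # toy context encoding
    subkey = hashlib.sha256(context).digest()           # 256-bit derived subkey
    return se_encrypt(subkey, nonce, msg)               # SE commits its (sub)key

def dummy_se_encrypt(subkey, nonce, msg):
    """Toy stand-in for SE: XOR with a hash-derived pad (illustration only)."""
    pad = hashlib.sha256(subkey + nonce).digest()
    return bytes(m ^ p for m, p in zip(msg, pad))

ct = committing_encrypt(b"k0", b"n0", b"ad", b"hello", dummy_se_encrypt)
```

Because the 256-bit subkey is a hash of the whole context, any party that can open the ciphertext to a different (key, nonce, AD) triple would have to find a hash collision, which is how key-commitment of SE lifts to context-commitment.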
## 2024/1543
* Title: HEonGPU: a GPU-based Fully Homomorphic Encryption Library 1.0
* Authors: Ali Şah Özcan, Erkay Savaş
* [Permalink](https://eprint.iacr.org/2024/1543)
* [Download](https://eprint.iacr.org/2024/1543.pdf)
### Abstract
HEonGPU is a high-performance library designed to optimize Fully Homomorphic Encryption (FHE) operations on Graphics Processing Units (GPUs). By leveraging the parallel processing capacity of GPUs, HEonGPU significantly reduces the computational overhead typically associated with FHE by executing complex operations concurrently. This allows for faster execution of homomorphic computations on encrypted data, enabling real-time applications in privacy-preserving machine learning and secure data processing. A key advantage of HEonGPU lies in its multi-stream architecture, which not only allows parallel processing of tasks to improve throughput but also eliminates the overhead of data transfers between the host device (i.e., CPU) and the GPU. By efficiently managing data within the GPU using multiple streams, HEonGPU minimizes the need for repeated memory transfers, further enhancing performance. HEonGPU’s GPU-optimized design makes it ideal for large-scale encrypted computations, providing users with reduced latency and higher performance across various FHE schemes.
## 2024/1544
* Title: PoUDR: Proof of Unified Data Retrieval in Decentralized Storage Networks
* Authors: Zonglun Li, Shuhao Zheng, Junliang Luo, Ziyue Xin, Dun Yuan, Shang Gao, Sichao Yang, Bin Xiao, Xue Liu
* [Permalink](https://eprint.iacr.org/2024/1544)
* [Download](https://eprint.iacr.org/2024/1544.pdf)
### Abstract
Decentralized storage networks, including IPFS and Filecoin, have created a marketplace where individuals exchange storage space for profit. These networks employ protocols that reliably ensure data storage providers accurately store data without
alterations, safeguarding the interests of storage purchasers. However, these protocols lack an effective and equitable payment mechanism for data retrieval, particularly when multiple data queriers are involved. This necessitates a protocol that ensures
both data integrity and fair compensation for data providers.
In decentralized storage, data is fragmented into small blocks and stored across multiple nodes, a process known as data swarming. Due to this property, traditional data exchange protocols are inadequate in terms of communication and economic efficiency.
We propose the Proof of Unified Data Retrieval protocol (PoUDR). PoUDR incorporates ZK-SNARKs to facilitate a fair data exchange protocol. PoUDR reduces the number of blockchain transactions for both single-block and data-swarming retrieval. The protocol
requires only a single key-revealing transaction submitted by the provider to the blockchain for each data block. This architecture allows for further optimization of transaction numbers through a batched proof technique on the provider's side. This
approach necessitates only $N_P$ transactions within a specific time frame when data consisting of $N_D$ blocks, provided by $N_P$ providers, is queried by $N_Q$ queriers.
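The single key-revealing transaction per block can be illustrated at a toy level. This sketch is ours, not the PoUDR protocol (which additionally uses ZK-SNARK proofs and an on-chain payment flow), and the XOR stream cipher is a deliberately simple stand-in: the provider distributes the encrypted block off-chain with a commitment to the key, and one on-chain reveal of the key serves every querier.

```python
import hashlib, secrets

def xor_stream(key, data):
    """Toy stream cipher: XOR with a SHA-256-derived keystream (illustration only)."""
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

block = b"data block stored in the swarm"
k = secrets.token_bytes(32)
ciphertext = xor_stream(k, block)          # sent off-chain to the queriers
commitment = hashlib.sha256(k).digest()    # published alongside the ciphertext

# Later, the provider's one key-revealing transaction publishes k on-chain:
assert hashlib.sha256(k).digest() == commitment   # queriers verify the reveal
assert xor_stream(k, ciphertext) == block         # and recover the block
```

In the real protocol, a SNARK proves that the committed key actually decrypts the ciphertext to the promised block, so queriers need not trust the provider before paying.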