[continued from previous message]
* Title: Marian: An Open Source RISC-V Processor with Zvk Vector Cryptography Extensions
* Authors: Thomas Szymkowiak, Endrit Isufi, Markku-Juhani Saarinen
* [Permalink](https://eprint.iacr.org/2024/1449)
* [Download](https://eprint.iacr.org/2024/1449.pdf)
### Abstract
The RISC-V Vector Cryptography Extensions (Zvk) were ratified in 2023 and integrated into the main ISA manuals in 2024. These extensions support high-speed symmetric cryptography (AES, SHA2, SM3, SM4) operating on the vector register file and offer
significant performance improvements over scalar cryptography extensions (Zk) due to data parallelism. As a ratified extension, Zvk is supported by compiler toolchains and is already being integrated into popular cryptographic middleware such as OpenSSL.
We report on Marian, the first open-source hardware implementation of a vector processor with the Zvk extensions. The design is based on the PULP ``Ara'' vector unit, which itself is an extension of the popular CVA6 processor. The implementation is in
SystemVerilog and has been tested using Virtex Ultrascale+ FPGA prototyping, with a planned tapeout targeting a 22nm process node. We offer an analysis of the architectural requirements that vector cryptography imposes on a processor, as well as the
initial estimates of performance and area for our implementation.
## 2024/1450
* Title: TentLogiX: 5-bit Chaos-Driven S-Boxes for Lightweight Cryptographic Systems
* Authors: Maha Allouzi, Arefeh Rahaei
* [Permalink](https://eprint.iacr.org/2024/1450)
* [Download](https://eprint.iacr.org/2024/1450.pdf)
### Abstract
Cryptography is a crucial method for ensuring the security of communication and data transfers across networks. While it excels on devices with abundant resources, such as PCs, servers, and smartphones, it may encounter challenges when applied to
resource-constrained Internet of Things (IoT) devices like Radio Frequency Identification (RFID) tags and sensors. To address this issue, a demand arises for a lightweight variant of cryptography known as lightweight cryptography (LWC).
In developing any cryptographic algorithm, the substitution box (S-box) is a fundamental component, providing nonlinear functionality between inputs and outputs. Researchers have proposed diverse S-box designs tailored for various applications, but only
a few manage to balance the trade-offs among cost, performance, and security, particularly in the context of resource-constrained IoT devices.
This paper delves into the realm of S-boxes employed in popular LWC algorithms, categorizing them by their input–output bit sizes and elucidating their strengths and limitations. The focus then shifts to a novel 5-bit S-box design, utilizing chaotic
mapping theory to introduce a randomized behavior.
Subsequently, the paper proposes TentLogiX, a 5-bit S-box constructed from a compound chaotic system (the tent-logistic system), which offers better chaotic performance and wider chaotic sequences, and explores its security robustness through various cryptanalysis techniques, including bijective analysis, nonlinearity assessment, linearity evaluation, and differential cryptanalysis. The paper concludes by presenting a thorough comparison that underscores the superiority of the TentLogiX 5-bit S-box over its 5-bit counterparts.
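The construction pipeline the abstract outlines can be illustrated with a toy sketch. The specific compound logistic-tent map, its parameters, and the first-seen quantization step below are assumptions drawn from common chaotic S-box recipes; the paper's exact map and selection procedure may differ.

```python
def logistic_tent(x, r):
    """One published compound logistic-tent map on [0, 1) (assumed form)."""
    if x < 0.5:
        y = r * x * (1 - x) + (4 - r) * x / 2
    else:
        y = r * x * (1 - x) + (4 - r) * (1 - x) / 2
    return y % 1.0

def chaotic_sbox_5bit(x0=0.37, r=3.99, skip=1000):
    """Quantize the chaotic orbit to 5 bits; keep first-seen values so the
    resulting lookup table is a bijection on {0, ..., 31}."""
    x, seen, sbox = x0, set(), []
    for _ in range(skip):              # discard the transient
        x = logistic_tent(x, r)
    for _ in range(100000):
        x = logistic_tent(x, r)
        v = int(x * 32)                # 5-bit quantization
        if v not in seen:
            seen.add(v)
            sbox.append(v)
        if len(sbox) == 32:
            break
    sbox += [v for v in range(32) if v not in seen]   # fill any missed bins
    return sbox

sbox = chaotic_sbox_5bit()
assert sorted(sbox) == list(range(32))   # bijective: a valid 5-bit S-box
```

Bijectivity is the first of the abstract's cryptanalysis criteria; nonlinearity, linearity, and differential uniformity would then be measured on the resulting table.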
## 2024/1451
* Title: Traffic-aware Merkle Trees for Shortening Blockchain Transaction Proofs
* Authors: Avi Mizrahi, Noam Koren, Ori Rottenstreich, Yuval Cassuto
* [Permalink](https://eprint.iacr.org/2024/1451)
* [Download](https://eprint.iacr.org/2024/1451.pdf)
### Abstract
Merkle trees play a crucial role in blockchain networks in organizing network state. They allow proving a particular value of an entry in the state to a node that maintains only the root of the Merkle trees, a hash-based signature computed over the data
in a hierarchical manner. Verification of particular state entries is crucial in reaching a consensus on the execution of a block where state information is required in the processing of its transactions. For instance, a payment transaction should be
based on the balance of the two involved accounts. The proof length affects the network communication and is typically logarithmic in the state size. In this paper, we take advantage of typical transaction characteristics for better organizing Merkle
trees to improve blockchain network performance. We focus on the common transaction processing where Merkle proofs are jointly provided for multiple accounts. We first provide lower bounds for the communication cost that are based on the distribution of
accounts involved in the transactions. We then describe algorithms that consider traffic patterns for significantly reducing it. The algorithms are inspired by various coding methods such as Huffman coding, partition and weight balancing. We also
generalize our approach towards the encoding of smart contract transactions that involve an arbitrary number of accounts. Finally, we rely on real blockchain data to show the savings enabled by our approach. The experimental evaluation is based on
transactions from the Ethereum network and demonstrates cost reduction for both payment transactions and smart contract transactions.
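The Huffman-coding analogy the abstract draws can be sketched directly: in a Merkle tree, a proof contributes one sibling hash per level, so placing frequently proven accounts nearer the root shortens the average proof. The account names and access frequencies below are illustrative, not from the paper.

```python
import heapq, math

def huffman_depths(freqs):
    """Return {account: depth} for a Huffman tree built over access frequencies."""
    heap = [(f, i, {a: 0}) for i, (a, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)                     # unique tie-breaker for equal weights
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {a: d + 1 for a, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

freqs = {"exchange": 0.55, "bridge": 0.25, "user1": 0.1, "user2": 0.05, "user3": 0.05}
depths = huffman_depths(freqs)
expected = sum(freqs[a] * depths[a] for a in freqs)   # avg proof length (hashes)
balanced = math.ceil(math.log2(len(freqs)))           # balanced-tree proof length
assert expected < balanced   # hot accounts get shorter proofs
```

Here the hot account sits at depth 1, so the expected proof length drops to 1.75 sibling hashes versus 3 for a balanced tree; the paper's algorithms additionally handle joint proofs for multiple accounts and weight rebalancing.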
## 2024/1452
* Title: On the Complexity of Cryptographic Groups and Generic Group Models
* Authors: Cong Zhang, Keyu Ji, Taiyu Wang, Bingsheng Zhang, Hong-Sheng Zhou, Xin Wang, Kui Ren
* [Permalink](https://eprint.iacr.org/2024/1452)
* [Download](https://eprint.iacr.org/2024/1452.pdf)
### Abstract
Ever since the seminal work of Diffie and Hellman, cryptographic (cyclic) groups have served as a fundamental building block for constructing cryptographic schemes and protocols. The security of these constructions can often be based on the hardness of (
cyclic) group-based computational assumptions. Then, the generic group model (GGM) has been studied as an idealized model (Shoup, EuroCrypt 1997), which justifies the hardness of many (cyclic) group-based assumptions and shows the limits of some group-based cryptosystems. We stress that the importance of the length of the group encoding, whether in a concrete group-based construction or assumption, or in the GGM, has not been studied.
In this work, we initiate a systematic study on the complexity of cryptographic groups and generic group models, varying in different lengths of group encodings, and demonstrate evidence that ``the length matters''. More concretely, we have the
following results:
-- We show that there is no black-box/relativizing reduction from the CDH-secure groups (i.e., over such groups, the computational Diffie-Hellman assumption holds) with shorter encodings, to the CDH-secure groups with longer encodings, within the same
security parameter. More specifically, given any arbitrary longer CDH-secure group, it is impossible to generically shorten the group encoding and obtain a shorter CDH-secure group within the same group order.
-- We show that there is a strict hierarchy of the GGMs with different lengths of encodings. That is, in the framework of indifferentiability, the shorter GGM is strictly stronger than the longer ones, even in the presence of computationally bounded
adversaries.
## 2024/1453
* Title: Breaking and Repairing SQIsign2D-East
* Authors: Wouter Castryck, Mingjie Chen, Riccardo Invernizzi, Gioella Lorenzon, Frederik Vercauteren
* [Permalink](https://eprint.iacr.org/2024/1453)
* [Download](https://eprint.iacr.org/2024/1453.pdf)
### Abstract
We present a key recovery attack on SQIsign2D-East that reduces its security level from $\lambda$ to $\lambda/2$. We exploit the fact that each signature leaks a Legendre symbol modulo the secret degree of the private key isogeny. About $\lambda/2$
signatures are enough for these Legendre symbols to fully determine the secret degree, which can then be recovered by exhaustive search over a set of size $O(2^{\lambda/2})$. Once the degree is known, the private key isogeny itself can be found, again by
exhaustive search, in time $\tilde{O}(2^{\lambda/2})$.
We also present a new version of the protocol which does not leak any such information about the private key and show that our modified protocol is more efficient than the original one. Finally, we give a security analysis as well as a new proof of
security.
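The filtering step in the attack can be illustrated in a toy model: each leaked Legendre symbol is consistent with roughly half of the remaining candidates for the secret value, so about $\lambda/2$ symbols pin it down. The sketch below assumes, purely for illustration, that the secret degree is a small odd prime; in SQIsign2D-East the degree is not prime and the search is structured differently.

```python
import random

def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion."""
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

random.seed(1)
primes = [p for p in range(3, 2000) if all(p % q for q in range(2, p))]
secret = random.choice(primes)          # stands in for the secret degree
candidates = list(primes)
for _ in range(200):                    # each leaked symbol halves the set
    if len(candidates) == 1:
        break
    a = random.randrange(2, 10**6)
    sym = legendre(a, secret)           # what one signature leaks
    candidates = [d for d in candidates if legendre(a, d) == sym]

assert candidates == [secret]           # the secret survives every filter
```

Since the true secret is consistent with every observed symbol while each other candidate survives a round with probability about 1/2, the candidate set shrinks geometrically, matching the abstract's $\lambda/2$-signature estimate.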
## 2024/1454
* Title: Interval Key-Encapsulation Mechanism
* Authors: Alexander Bienstock, Yevgeniy Dodis, Paul Rösler, Daniel Wichs
* [Permalink](https://eprint.iacr.org/2024/1454)
* [Download](https://eprint.iacr.org/2024/1454.pdf)
### Abstract
Forward-Secure Key-Encapsulation Mechanism (FS-KEM; Canetti et al. Eurocrypt 2003) allows Alice to encapsulate a key $k$ to Bob for some time $t$ such that Bob can decapsulate it at any time $t'\leq t$. Crucially, a corruption of Bob's secret key after
time $t$ does not reveal $k$.
In this work, we generalize and extend this idea by also taking Post-Compromise Security (PCS) into account and call it Interval Key-Encapsulation Mechanism (IKEM). Thus, we do not only protect confidentiality of previous keys against future corruptions
but also confidentiality of future keys against past corruptions. For this, Bob can regularly renew his secret key and inform others about the corresponding public key. IKEM enables Bob to decapsulate keys sent to him over an interval of time extending
into the past, in case senders have not obtained his latest public key; forward security only needs to hold with respect to keys encapsulated before this interval. This basic IKEM variant can be instantiated based on standard KEM, which we prove to be
optimal in terms of assumptions as well as ciphertext and key sizes.
We also extend this notion of IKEM for settings in which Bob decapsulates (much) later than Alice encapsulates (e.g., in high-latency or segmented networks): if a third user Charlie forwards Alice's ciphertext to Bob and, additionally, knows a recently
renewed public key of Bob's, Charlie could re-encrypt the ciphertext for better PCS. We call this extended notion IKEMR. Our first IKEMR construction based on trapdoor permutations has (almost) constant-sized ciphertexts in the number of re-encryptions; and our second IKEMR construction based on FS-PKE has constant-sized public keys in the interval size.
Finally, to bypass our lower bound on the IKEM(R) secret key size, which must be linear in the interval size, we develop a new Interval RAM primitive with which Bob only stores a constant-sized part of his secret key locally, while outsourcing the rest to a (possibly adversarial) server.
For all our constructions, we achieve security against active adversaries. For this, we obtain new insights on Replayable CCA security for KEM-type primitives, which might be of independent interest.
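The interval semantics can be pictured with a toy epoch-window key store: keys older than the interval are erased (forward security), and renewing adds a fresh key (post-compromise security). This is only a mnemonic for the abstract's decapsulation window; real IKEM secret keys are KEM keys with the abstract's linear-size lower bound, not independent random strings.

```python
import secrets

class IntervalKeyStore:
    """Toy model of Bob's IKEM state: decapsulation works for ciphertexts
    from the last `interval` epochs only."""
    def __init__(self, interval):
        self.interval = interval
        self.keys = {}          # epoch -> per-epoch secret key material
        self.epoch = -1

    def renew(self):
        """Advance one epoch: add a fresh key, erase keys outside the window."""
        self.epoch += 1
        self.keys[self.epoch] = secrets.token_bytes(16)
        for old in [e for e in self.keys if e <= self.epoch - self.interval]:
            del self.keys[old]   # forward security: old keys are gone

    def can_decapsulate(self, sender_epoch):
        return sender_epoch in self.keys

bob = IntervalKeyStore(interval=3)
for _ in range(5):
    bob.renew()                          # epochs 0..4
assert bob.can_decapsulate(4) and bob.can_decapsulate(2)
assert not bob.can_decapsulate(1)        # outside the interval: erased
```

A corruption at epoch 4 reveals only the keys for epochs 2-4, so keys encapsulated before the interval stay confidential, mirroring the forward-security guarantee the abstract describes.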
## 2024/1455
* Title: Threshold PAKE with Security against Compromise of all Servers
* Authors: Yanqi Gu, Stanislaw Jarecki, Pawel Kedzior, Phillip Nazarian, Jiayu Xu
* [Permalink](https://eprint.iacr.org/2024/1455)
* [Download](https://eprint.iacr.org/2024/1455.pdf)
### Abstract
We revisit the notion of threshold Password-Authenticated Key Exchange (tPAKE), and we extend it to augmented tPAKE (atPAKE), which protects password information even if all servers are compromised, except for allowing an (inevitable) offline
dictionary attack. Compared to prior notions of tPAKE this is analogous to replacing symmetric PAKE, where the server stores the user's password, with an augmented (or asymmetric) PAKE, like OPAQUE [JKX18], where the server stores a password hash, which
can be used only as a target in an offline dictionary search for the password. An atPAKE scheme also strictly improves on the security of an aPAKE, by secret-sharing the password hash among a set of servers. Indeed, our atPAKE protocol is a natural
realization of threshold OPAQUE.
We formalize atPAKE in the framework of Universal Composability (UC), and show practical ways to realize it. All our schemes are generic compositions which interface to any aPAKE used as a sub-protocol, making them easier to adopt. Our main scheme
relies on threshold Oblivious Pseudorandom Function (tOPRF), and our independent contribution fixes a flaw in the UC tOPRF notion of [JKKX17] and upgrades the tOPRF scheme therein to achieve the fixed definition while preserving its minimal cost and
round complexity. The technique we use enforces implicit agreement on arbitrary context information within threshold computation, and it is of general interest.
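The tOPRF role in such protocols can be sketched "in the exponent" over a tiny group: the key is Shamir-shared, any $t$ servers exponentiate the client's blinded element, and Lagrange recombination plus unblinding yields the PRF value. This is the standard 2HashDH-style evaluation pattern, not the paper's fixed construction, and the parameter sizes are wildly insecure, chosen only for readability.

```python
import random

p, q, g = 23, 11, 2          # subgroup of prime order q=11 inside Z_23^*

def shamir_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
            for i in range(1, n + 1)}

def lagrange(i, ids):
    """Lagrange coefficient at 0 for party i, over participant set ids."""
    num = den = 1
    for j in ids:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

random.seed(4)
k = 7                               # the OPRF key, secret-shared 2-of-3
shares = shamir_shares(k, 2, 3)

h = pow(g, 5, p)                    # stand-in for hashing the password to the group
r = 3                               # client's blinding scalar
a = pow(h, r, p)                    # blinded element sent to the servers
ids = [1, 3]                        # any t=2 servers respond
resp = {i: pow(a, shares[i], p) for i in ids}
combined = 1
for i in ids:
    combined = combined * pow(resp[i], lagrange(i, ids), p) % p
out = pow(combined, pow(r, -1, q), p)
assert out == pow(h, k, p)          # client recovers h^k; servers never see h
```

No single server (or any set below the threshold) learns the key $k$ or the password-derived element $h$, which is exactly the property the atPAKE construction leans on.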
## 2024/1456
* Title: Crooked Indifferentiability of the Feistel Construction
* Authors: Alexander Russell, Qiang Tang, Jiadong Zhu
* [Permalink](https://eprint.iacr.org/2024/1456)
* [Download](https://eprint.iacr.org/2024/1456.pdf)
### Abstract
The Feistel construction is a fundamental technique for building pseudorandom permutations and block ciphers. This paper shows that a simple adaptation of the construction is resistant, even to algorithm substitution attacks---that is, adversarial
subversion---of the component round functions. Specifically, we establish that a Feistel-based construction with more than $337n/\log(1/\epsilon)$ rounds can transform a subverted random function---which disagrees with the original one at a small
fraction (denoted by $\epsilon$) of inputs---into an object that is \emph{crooked-indifferentiable} from a random permutation, even if the adversary is aware of all the randomness used in the transformation. Here, $n$ denotes the length of both the input
and output of the round functions that underlie the Feistel cipher. We also provide a lower bound showing that the construction cannot use fewer than $2n/\log(1/\epsilon)$ rounds to achieve crooked-indifferentiable security.
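Plugging numbers into the abstract's two bounds shows the gap between them. Taking the logarithm as base 2 is an assumption here; the abstract leaves the base unstated.

```python
import math

def sufficient_rounds(n, eps):
    """Rounds above which the construction is crooked-indifferentiable."""
    return math.ceil(337 * n / math.log2(1 / eps))

def necessary_rounds(n, eps):
    """Rounds below which no security is possible (the lower bound)."""
    return math.ceil(2 * n / math.log2(1 / eps))

n, eps = 128, 2**-20        # illustrative parameters, not from the paper
assert sufficient_rounds(n, eps) == 2157
assert necessary_rounds(n, eps) == 13
```

For 128-bit round functions subverted on a $2^{-20}$ fraction of inputs, the upper bound asks for a few thousand rounds while the impossibility kicks in only below about a dozen, so the constant 337 leaves substantial room between the two bounds.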
## 2024/1457
* Title: A Combined Design of 4-PLL-TRNG and 64-bit CDC-7-XPUF on a Zynq-7020 SoC
* Authors: Oğuz Yayla, Yunus Emre Yılmaz
* [Permalink](https://eprint.iacr.org/2024/1457)
* [Download](https://eprint.iacr.org/2024/1457.pdf)
### Abstract
True Random Number Generators (TRNGs) and Physically Unclonable Functions (PUFs) are critical hardware primitives for cryptographic systems, providing randomness and device-specific security. TRNGs require complete randomness, while PUFs rely on
consistent, device-unique responses. In this work, both primitives are implemented on a System-on-Chip Field-Programmable Gate Array (SoC FPGA), leveraging the integrated Phase-Locked Loops (PLLs) for robust entropy generation in PLL-based TRNGs. A novel
backtracking parameter selection algorithm for the TRNG implementation is employed, alongside a methodology to boost data generation rates without compromising entropy. The design is rigorously evaluated using the German BSI AIS-20/31 standards. For the
PUF implementation, the Arbiter PUF, known for its lightweight design and key generation, is enhanced to resist machine learning attacks by implementing a 32-bit and a 64-bit component-differentially challenged XOR Arbiter PUF (CDC-XPUF). These designs
are tested using standard PUF metrics, including uniformity, correctness, and uniqueness. Finally, a combined 4-PLL-TRNG and 64-bit CDC-XPUF design is introduced and evaluated for its suitability in Internet-of-Things (IoT) systems, demonstrating strong
performance in both TRNG and PUF tests. The tests are conducted on the Xilinx Zynq 7020 SoC using a ZC702 evaluation board, confirming the effectiveness of this integrated approach for secure, low-resource applications like IoT.
## 2024/1458
* Title: Providing Integrity for Authenticated Encryption in the Presence of Joint Faults and Leakage
* Authors: Francesco Berti, Itamar Levi
* [Permalink](https://eprint.iacr.org/2024/1458)
* [Download](https://eprint.iacr.org/2024/1458.pdf)
### Abstract
Passive (leakage exploitation) and active (fault injection) physical attacks pose a significant threat to cryptographic schemes. Although leakage-resistant cryptography is well studied, there is little work on mode-level security in the presence of joint
faults and leakage exploiting adversaries. In this paper, we focus on integrity for authenticated encryption (AE).
First, we point out that there is an inherent attack in the fault-resilience model presented at ToSC 2023. This shows how fragile the freshness condition of a forgery is when faults are injected into either the tag-generation or the encryption algorithm.
Therefore, we provide new integrity definitions for AE in the presence of leakage and faults. We follow the atomic model, in which the scheme is divided into atoms (or components, e.g. a call to a block cipher) and the adversary may inject a fault only into the inputs of an atom. We envision this model as a first step for leveled implementations in the faults context; the granularity of atoms can be made finer or coarser (for example, instead of considering a call to a block cipher, we can consider atoms to be rounds of the block cipher). We hold the underlying belief that it would be easier to protect smaller blocks than a full scheme. The proposed model is very flexible and allows us to understand where to apply fault countermeasures (in some very interesting cases this model can reduce faults inside atoms to faults on their outputs, as we discuss).
We then show that an AE-scheme using a single call to a highly leakage-protected (and thus very expensive) component, CONCRETE (presented at Africacrypt 2019), maintains integrity in the presence of leakage in both encryption and decryption, and faults
only in decryption. On the other hand, a single fault in encryption is enough to forge. Therefore, we first introduce a weaker definition (which restricts the meaning of freshness), weak integrity, which CONCRETE achieves even if the adversary can
introduce faults in the encryption queries (with some restrictions on the number and type of faults). Finally, we provide a variant, CONCRETE2, which is slightly more computationally expensive, but still uses a single call to a strongly protected
component that provides integrity in the presence of leakage and faults.
## 2024/1459
* Title: Verifiable Oblivious Pseudorandom Functions from Lattices: Practical-ish and Thresholdisable
* Authors: Martin R. Albrecht, Kamil Doruk Gur
* [Permalink](https://eprint.iacr.org/2024/1459)
* [Download](https://eprint.iacr.org/2024/1459.pdf)
### Abstract
We revisit the lattice-based verifiable oblivious PRF construction from PKC'21 and remove or mitigate its central three sources of inefficiency. First, applying Rényi divergence arguments, we eliminate one superpolynomial factor from the ciphertext
modulus \(q\), allowing us to reduce the overall bandwidth consumed by RLWE samples by about a factor of four. This necessitates us introducing intermediate unpredictability notions to argue PRF security of the final output in the Random Oracle model.
Second, we remove the reliance on the \(\mathsf{1D-SIS}\) assumption, which reduces another superpolynomial factor, albeit to a factor that is still superpolynomial. Third, by applying the state-of-the-art in zero-knowledge proofs for lattice statements,
we achieve a reduction in bandwidth of several orders of magnitude for this material.
Finally, we give a \(t\)-out-of-\(n\) threshold variant of the VOPRF for constant \(t\) and with trusted setup, based on an \(n\)-out-of-\(n\) distributed variant of the VOPRF (and without trusted setup).
## 2024/1460
* Title: PPSA: Polynomial Private Stream Aggregation for Time-Series Data Analysis
* Authors: Antonia Januszewicz, Daniela Medrano Gutierrez, Nirajan Koirala, Jiachen Zhao, Jonathan Takeshita, Jaewoo Lee, Taeho Jung
* [Permalink](https://eprint.iacr.org/2024/1460)
* [Download](https://eprint.iacr.org/2024/1460.pdf)
### Abstract
Modern data analytics requires computing functions on streams of data points from many users that are challenging to calculate, due to both the high scale and nontrivial nature of the computation at hand. The need for data privacy complicates this matter
further, as general-purpose privacy-enhancing technologies face limitations in at least scalability or utility. Existing work has attempted to improve this by designing purpose-built protocols for the use case of Private Stream Aggregation; however,
prior work lacks the ability to compute more general aggregative functions without the assumption of trusted parties or hardware.
In this work, we present PPSA, a protocol that performs Private Polynomial Stream Aggregation, allowing the private computation of any polynomial function on user data streams even in the presence of an untrusted aggregator. Unlike previous state-of-the-
art approaches, PPSA enables secure aggregation beyond simple summations without relying on trusted hardware; it utilizes only tools from cryptography and differential privacy. Our experiments show that PPSA has low latency during the encryption and
aggregation processes with an encryption latency of 10.5 ms and aggregation latency of 21.6 ms for 1000 users, which are up to 138$\times$ faster than the state-of-the-art prior work.
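For intuition, the sum-only baseline that PPSA generalizes (classic private stream aggregation in the style of Shi et al.) can be sketched in a few lines: each user adds a pairwise-cancelling mask, so the aggregator learns only the sum. The values and the ad-hoc mask distribution below are illustrative; PPSA's contribution is supporting arbitrary polynomial functions, which this baseline cannot do.

```python
import random

random.seed(0)
MOD = 2**31 - 1
values = [7, 11, 5]                      # one time step of three users' data

# Pairwise-cancelling masks; in a real scheme these derive from keys fixed
# at setup, so users need not communicate at aggregation time.
masks = [random.randrange(MOD) for _ in values[:-1]]
masks.append((-sum(masks)) % MOD)        # masks sum to zero mod MOD

reports = [(v + m) % MOD for v, m in zip(values, masks)]   # sent to aggregator
assert sum(reports) % MOD == sum(values)  # aggregator recovers only the sum
```

Each individual report is uniformly distributed and hides its user's value; only the aggregate of all reports is meaningful, which is the untrusted-aggregator property PPSA extends to polynomial evaluation.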
## 2024/1461
* Title: Detecting and Correcting Computationally Bounded Errors: A Simple Construction Under Minimal Assumptions
* Authors: Jad Silbak, Daniel Wichs
* [Permalink](https://eprint.iacr.org/2024/1461)
* [Download](https://eprint.iacr.org/2024/1461.pdf)
### Abstract
We study error detection and error correction in a computationally bounded world, where errors are introduced by an arbitrary polynomial time adversarial channel. We consider codes where the encoding procedure uses random coins and define two distinct
variants: (1) in randomized codes, fresh randomness is chosen during each encoding operation and is unknown a priori, while (2) in self-seeded codes, the randomness of the encoding procedure is fixed once upfront and is known to the adversary. In both
cases, the randomness need not be known to the decoding procedure, and there is no trusted common setup between the encoder and decoder. The encoding and decoding algorithms are efficient and run in some fixed polynomial time, independent of the run
time of the adversary.
The parameters of standard codes for worst-case (inefficient) errors are limited by the Singleton bound: for rate $R$ it is not possible to detect more than a $1-R$ fraction of errors, or uniquely correct more than a $(1-R)/2$ fraction of errors, and
efficient codes matching this bound exist for sufficiently large alphabets. In the computationally bounded setting, we show that going beyond the Singleton bound implies one-way functions in the case of randomized codes and collision-resistant hash
functions in the case of self-seeded codes. We construct randomized and self-seeded codes under these respective minimal assumptions with essentially optimal parameters over a constant-sized alphabet:
- Detection: the codes have a rate $R \approx 1$ while detecting a $\rho \approx 1$ fraction of errors.
- Correction: for any $\rho < 1/2$, the codes uniquely correct a $\rho$ fraction of errors with rate $R \approx 1-\rho$.
Codes for computationally bounded errors were studied in several prior works starting with Lipton (STACS '94), but all such works either: (a) need some trusted common setup (e.g., public-key infrastructure, common reference string) between the encoder and decoder, or (b) only handle channels whose complexity is a priori bounded below that of the code.
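The quantitative gap the abstract claims can be stated as plain arithmetic: for worst-case errors, uniquely correcting a $\rho$ fraction forces rate $R \le 1 - 2\rho$ by the Singleton bound, while the computationally bounded codes here reach $R \approx 1 - \rho$.

```python
def singleton_rate(rho):
    """Best worst-case rate that can uniquely correct a rho fraction of errors."""
    return 1 - 2 * rho

def computational_rate(rho):
    """Rate the abstract's codes achieve against bounded channels (R ~ 1 - rho)."""
    return 1 - rho

rho = 0.4                    # close to the rho < 1/2 limit
assert abs(singleton_rate(rho) - 0.2) < 1e-9
assert abs(computational_rate(rho) - 0.6) < 1e-9
```

At $\rho = 0.4$ the rate triples (0.6 versus 0.2), and as $\rho$ approaches $1/2$ the worst-case rate vanishes while the computational rate stays above $1/2$.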
## 2024/1462
* Title: Efficient Fuzzy Private Set Intersection from Fuzzy Mapping
* Authors: Ying Gao, Lin Qi, Xiang Liu, Yuanchao Luo, Longxin Wang
* [Permalink](https://eprint.iacr.org/2024/1462)
* [Download](https://eprint.iacr.org/2024/1462.pdf)
### Abstract
Private set intersection (PSI) allows Sender holding a set \(X\) and Receiver holding a set \(Y\) to compute only the intersection \(X\cap Y\) for Receiver. We focus on a variant of PSI, called fuzzy PSI (FPSI), where Receiver only gets points in \(X\)
that are at a distance not greater than a threshold from some points in \(Y\).
Most current FPSI approaches first pick out pairs of points that are potentially close and then determine whether the distance of each selected pair is indeed small enough to yield FPSI result. Their complexity bottlenecks stem from the excessive number
of point pairs selected by the first picking process. Regarding this process, we consider a more general notion, called fuzzy mapping (Fmap), which maps each point of the two parties to a set of identifiers, with closely located points sharing an identifier; these shared identifiers determine the selected point pairs.
We initiate the formal study on Fmap and show novel Fmap instances for Hamming and \(L_\infty\) distances to reduce the number of selected pairs. We demonstrate the powerful capability of Fmap with some superior properties in constructing FPSI variants
and provide a generic construction from Fmap to FPSI.
Our new Fmap instances lead to the fastest semi-honest secure FPSI protocols in high-dimensional space to date, for both Hamming and general \(L_{\mathsf p\in [1, \infty]}\) distances. For Hamming distance, our protocol is the first to achieve strictly linear complexity in the input sizes. For \(L_{\mathsf p\in [1, \infty]}\) distance, our protocol is the first to achieve complexity linear in the input sizes, dimension, and threshold.
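The Fmap idea for \(L_\infty\) can be illustrated with the textbook grid approach: hash each point to the cell of side \(2\delta\) containing it; the receiver uses its own cell while the sender lists its cell plus all adjacent ones, so any two points within distance \(\delta\) share an identifier. The paper's actual instances are more refined (this naive version selects \(3^{\mathrm{dim}}\) identifiers per sender point); the sketch is for intuition only.

```python
from itertools import product

def receiver_id(point, delta):
    """Identifier of the grid cell (side 2*delta) containing the point."""
    return tuple(c // (2 * delta) for c in point)

def sender_ids(point, delta):
    """Own cell plus all adjacent cells: guarantees a shared identifier
    whenever two points are within L_inf distance delta."""
    base = receiver_id(point, delta)
    return {tuple(b + o for b, o in zip(base, offs))
            for offs in product((-1, 0, 1), repeat=len(point))}

delta = 5
p, q, far = (12, 40), (9, 44), (30, 40)   # L_inf(p,q)=4, L_inf(p,far)=18
assert receiver_id(q, delta) in sender_ids(p, delta)        # close: matched
assert receiver_id(far, delta) not in sender_ids(p, delta)  # far: no match
```

Matched identifiers are exactly the "potentially close" pairs handed to the second, exact distance-checking phase; reducing their number is the complexity bottleneck the paper attacks.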
## 2024/1463
* Title: Asynchronous Verifiable Secret Sharing with Elastic Thresholds and Distributed Key Generation
* Authors: Junming Li, Zhi Lu, Renfei Shen, Yuanqing Feng, Songfeng Lu
* [Permalink](https://eprint.iacr.org/2024/1463)
* [Download](https://eprint.iacr.org/2024/1463.pdf)
### Abstract
Distributed Key Generation (DKG) is a technique that enables the generation of threshold cryptography keys among a set of mutually untrusting nodes. DKG generates keys for a range of decentralized applications such as threshold signatures, multiparty
computation, and Byzantine consensus. Over the past five years, research on DKG has focused on optimizing network communication protocols to improve overall system efficiency by reducing communication complexity. However, state-of-the-art asynchronous distributed key generation (ADKG) schemes (e.g., Kokoris-Kogias ADKG, CCS 2020, and Das ADKG, S\&P 2022, among others) only support recovery thresholds of either $f$ or $2f$, where $f$ is the maximum number of malicious nodes. This paper proposes an asynchronous verifiable secret sharing protocol featuring an elastic threshold, where $t \in [f,n-f-1]$ and $n \ge 3f+1$ is the total number of parties. Our protocol enables a dealer to share up to $t+1$ secrets with a total communication cost of $O(\lambda n^3)$, where $\lambda$ is the security parameter, and the protocol relies on the hardness of the $q$-SDH problem. We further modify the Schnorr protocol to enable simultaneous commitments to multiple secrets, which we refer to as $m$-Schnorr.
## 2024/1464
* Title: SoK: Descriptive Statistics Under Local Differential Privacy
* Authors: René Raab, Pascal Berrang, Paul Gerhart, Dominique Schröder
* [Permalink](https://eprint.iacr.org/2024/1464)
* [Download](https://eprint.iacr.org/2024/1464.pdf)
### Abstract
Local Differential Privacy (LDP) provides a formal guarantee of privacy that enables the collection and analysis of sensitive data without revealing any individual's data. While LDP methods have been extensively studied, there is a lack of a systematic
and empirical comparison of LDP methods for descriptive statistics. In this paper, we first provide a systematization of LDP methods for descriptive statistics, comparing their properties and requirements. We demonstrate that several mean estimation
methods based on sampling from a Bernoulli distribution are equivalent in the one-dimensional case and introduce methods for variance estimation. We then empirically compare methods for mean, variance, and frequency estimation. Finally, we provide
recommendations for the use of LDP methods for descriptive statistics and discuss their limitations and open questions.
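One of the Bernoulli-sampling mean estimators the abstract groups together can be sketched in toy form: each user with a value in \([0,1]\) reports a single biased bit, and the analyst debiases the average. This is the standard \(\varepsilon\)-LDP one-bit mean estimator; which exact variants the paper proves equivalent is not restated here.

```python
import math, random

def ldp_report(x, eps):
    """Report 1 w.p. (x*(e^eps - 1) + 1) / (e^eps + 1); eps-LDP for x in [0,1]."""
    e = math.exp(eps)
    p = (x * (e - 1) + 1) / (e + 1)
    return 1 if random.random() < p else 0

def estimate_mean(bits, eps):
    """Invert the report bias to get an unbiased mean estimate."""
    e = math.exp(eps)
    avg = sum(bits) / len(bits)
    return (avg * (e + 1) - 1) / (e - 1)

random.seed(0)
data = [random.random() for _ in range(20000)]    # true mean near 0.5
bits = [ldp_report(x, eps=1.0) for x in data]
est = estimate_mean(bits, eps=1.0)
assert abs(est - sum(data) / len(data)) < 0.05    # close despite per-user noise
```

Each reported bit on its own reveals almost nothing about the user's value (the two response probabilities differ by at most a factor \(e^{\varepsilon}\)), yet the debiased average converges to the true mean as the population grows.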
## 2024/1465
* Title: Linear approximations of the Flystel construction
* Authors: Tim Beyne, Clémence Bouvier
* [Permalink](https://eprint.iacr.org/2024/1465)
* [Download](https://eprint.iacr.org/2024/1465.pdf)
### Abstract
Using a purity theorem for exponential sums due to Rojas-León, we upper bound the absolute correlations of linear approximations of the Flystel construction used in Anemoi. This resolves open problem 7.1 in [Bouvier, 2023].
## 2024/1466
* Title: Dishonest Majority Constant-Round MPC with Linear Communication from DDH
* Authors: Vipul Goyal, Junru Li, Ankit Kumar Misra, Rafail Ostrovsky, Yifan Song, Chenkai Weng
* [Permalink](https://eprint.iacr.org/2024/1466)
* [Download](https://eprint.iacr.org/2024/1466.pdf)
### Abstract
In this work, we study constant round multiparty computation (MPC) for Boolean circuits against a fully malicious adversary who may control up to $n-1$ out of $n$ parties. Without relying on fully homomorphic encryption (FHE), the best-known results in
this setting are achieved by Wang et al. (CCS 2017) and Hazay et al. (ASIACRYPT 2017) based on garbled circuits, which require quadratic communication in the number of parties, $O(|C|\cdot n^2)$. In contrast, for non-constant round MPC, the recent
result by Rachuri and Scholl (CRYPTO 2022) has achieved linear communication $O(|C|\cdot n)$.
In this work, we present the first concretely efficient constant round MPC protocol in this setting with linear communication in the number of parties $O(|C|\cdot n)$. Our construction can be based on any public-key encryption scheme that is linearly
homomorphic for public keys. Our work gives a concrete instantiation from a variant of the El-Gamal Encryption Scheme assuming the DDH assumption. The analysis shows that when the computational security parameter $\lambda=128$ and statistical security
parameter $\kappa=80$, our protocol achieves a smaller communication than Wang et al. (CCS 2017) when there are $16$ parties for AES circuit and $8$ parties for general Boolean circuits (where we assume that the numbers of AND gates and XOR gates are the
same). When compared with the recent work by Beck et al. (CCS 2023) that achieves constant communication complexity $O(|C|)$ in the strong honest majority setting ($t<(1/2-\epsilon)n$, where $\epsilon$ is a constant), our protocol is better as long as $n < 3500$ (when $t=n/4$ for their work).
## 2024/1467
* Title: P2C2T: Preserving the Privacy of Cross-Chain Transfer
* Authors: Panpan Han, Zheng Yan, Laurence T. Yang, Elisa Bertino
* [Permalink](https://eprint.iacr.org/2024/1467)
* [Download](https://eprint.iacr.org/2024/1467.pdf)
### Abstract
[continued in next message]