• [digest] 2024 Week 41 (1/4)

    From IACR ePrint Archive@21:1/5 to All on Mon Oct 14 02:29:48 2024
    ## In this issue

    1. [2024/334] The Impact of Reversibility on Parallel Pebbling
    2. [2024/563] A Note on Related-Tweakey Impossible Differential ...
    3. [2024/867] Optimal Traitor Tracing from Pairings
    4. [2024/1582] Halving differential additions on Kummer lines
    5. [2024/1591] MPC-in-the-Head Framework without Repetition and ...
    6. [2024/1595] DeepFold: Efficient Multilinear Polynomial ...
    7. [2024/1596] Secret Sharing with Publicly Verifiable Deletion
    8. [2024/1597] An undetectable watermark for generative image models
    9. [2024/1598] On the security of the initial tropical Stickel ...
    10. [2024/1599] Simplified PIR and CDS Protocols and Improved ...
    11. [2024/1600] Pacmann: Efficient Private Approximate Nearest ...
    12. [2024/1601] Juggernaut: Efficient Crypto-Agnostic Byzantine ...
    13. [2024/1602] Cryptography and Collective Power
    14. [2024/1603] Boosting SNARKs and Rate-1 Barrier in Arguments of ...
    15. [2024/1604] Predicting truncated multiple matrix congruential ...
    16. [2024/1605] Nebula: Efficient read-write memory and switchboard ...
    17. [2024/1606] NeutronNova: Folding everything that reduces to ...
    18. [2024/1607] Tighter Proofs for PKE-to-KEM Transformation in the ...
    19. [2024/1608] Mild Asymmetric Message Franking: Illegal-Messages- ...
    20. [2024/1609] Blaze: Fast SNARKs from Interleaved RAA Codes
    21. [2024/1610] Secret Sharing with Snitching
    22. [2024/1611] Rhombus: Fast Homomorphic Matrix-Vector ...
    23. [2024/1612] On Wagner's k-Tree Algorithm Over Integers
    24. [2024/1613] Efficient Maliciously Secure Oblivious Exponentiations
    25. [2024/1614] Related-Key Cryptanalysis of FUTURE
    26. [2024/1615] LeOPaRd: Towards Practical Post-Quantum Oblivious ...
    27. [2024/1616] End-to-End Encrypted Cloud Storage in the Wild: A ...
    28. [2024/1617] Algebraic Equipage for Learning with Errors in ...
    29. [2024/1618] Shaking up authenticated encryption
    30. [2024/1619] Structure-Preserving Compressing Primitives: Vector ...
    31. [2024/1620] Really Complex Codes with Application to STARKs
    32. [2024/1621] PAKE Combiners and Efficient Post-Quantum ...
    33. [2024/1622] A New Approach Towards Encrypted Data Sharing and ...
    34. [2024/1623] General Functional Bootstrapping using CKKS
    35. [2024/1624] Double-Matrix: Complete Diffusion in a Single Round ...
    36. [2024/1625] On the Tight Security of the Double Ratchet
    37. [2024/1626] Faster Proofs and VRFs from Isogenies
    38. [2024/1627] Lollipops of pairing-friendly elliptic curves for ...
    39. [2024/1628] Glacius: Threshold Schnorr Signatures from DDH with ...
    40. [2024/1629] Efficient Key-Switching for Word-Type FHE and GPU ...
    41. [2024/1630] Hybrid Password Authentication Key Exchange in the ...
    42. [2024/1631] Sparrow: Space-Efficient zkSNARK for Data-Parallel ...
    43. [2024/1632] Fully Secure Searchable Encryption from PRFs, ...
    44. [2024/1633] Efficient Boolean-to-Arithmetic Mask Conversion in ...
    45. [2024/1634] On Constructing Pseudorandom Involutions: Feistel ...
    46. [2024/1635] RPO-M31 and XHash-M31: Efficient Hash Functions for ...
    47. [2024/1636] Quantum State Group Actions
    48. [2024/1637] Bootstrapping Small Integers With CKKS
    49. [2024/1638] Modular Reduction in CKKS
    50. [2024/1639] Efficient Quantum Pseudorandomness from Hamiltonian ...
    51. [2024/1640] Maximizing the Utility of Cryptographic Setups: ...
    52. [2024/1641] Simplification Issues of An Authentication and Key ...
    53. [2024/1642] Fuzzy PSI via Oblivious Protocol Routing
    54. [2024/1643] Optimizing Liveness for Blockchain-Based Sealed-Bid ...
    55. [2024/1644] A Tight Lower Bound on the TdScrypt Trapdoor ...
    56. [2024/1645] Fiat Shamir Goes Rational
    57. [2024/1646] Transaction Execution Mechanisms
    58. [2024/1647] Curve Forests: Transparent Zero-Knowledge Set ...
    59. [2024/1648] SIMD-style Sorting of Integer Sequence in RLWE ...
    60. [2024/1649] Multiplying Polynomials without Powerful ...
    61. [2024/1650] Towards Practical Oblivious Map
    62. [2024/1651] One-Shot Native Proofs of Non-Native Operations in ...

    ## 2024/334

    * Title: The Impact of Reversibility on Parallel Pebbling
    * Authors: Jeremiah Blocki, Blake Holman, Seunghoon Lee
    * [Permalink](https://eprint.iacr.org/2024/334)
    * [Download](https://eprint.iacr.org/2024/334.pdf)

    ### Abstract

    The (parallel) classical black pebbling game is a helpful abstraction which allows us to analyze the resources (time, space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$ on a (parallel)
    computer. In particular, the parallel black pebbling game has been used as a tool to quantify the (in)security of Data-Independent Memory-Hard Functions (iMHFs). However, the classical black pebbling game is not suitable for analyzing the cost of a quantum
    preimage attack. Thus, Blocki et al. (TCC 2022) introduced the parallel reversible pebbling game as a tool to analyze resource requirements for a quantum computer. While there is an extensive line of work analyzing pebbling complexity in the (parallel)
    black pebbling game, comparatively little is known about the parallel reversible pebbling game. Our first result is a lower bound of $\Omega\left(N^{1+\sqrt{\frac{ 2-o(1)}{\log N}}} \right)$ on the reversible cumulative pebbling cost for a line graph on $
    N$ nodes. This yields a separation between classical and reversible pebbling costs demonstrating that the reversibility constraint can increase cumulative pebbling costs (and space-time costs) by a multiplicative factor of $N^{(\sqrt 2 + o(1))/\sqrt{\log
    N}}$ --- the classical pebbling cost (space-time or cumulative) for a line graph is just $\mathcal{O}(N)$. On the positive side, we prove that any classical parallel pebbling can be transformed into a reversible pebbling strategy whilst increasing space-
    time (resp. cumulative memory) costs by a multiplicative factor of at most $\mathcal{O}\left(N^{\sqrt{\frac{8}{\log N}}}\right)$ (resp. $\mathcal{O}\left(N^{\mathcal{O}(1)/\sqrt[4]{\log N}}\right)$). We also analyze the impact of the reversibility
    constraint on the cumulative pebbling cost of depth-robust and depth-reducible DAGs, exploiting reversibility to improve constant factors in a prior lower bound of Alwen et al. (EUROCRYPT 2017). For depth-reducible DAGs, we show that the state-of-the-art
    recursive pebbling techniques of Alwen et al. (EUROCRYPT 2017) can be converted into a recursive reversible pebbling attack without any asymptotic increases in pebbling costs. Finally, we extend a result of Blocki et al. (ITCS 2020) to show that it is
    Unique Games hard to approximate the reversible cumulative pebbling cost of a DAG $G$ to within any constant factor.
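
    For a rough sense of the scale of this separation, the short calculation below evaluates the gap factor $N^{\sqrt{2/\log N}}$ for a few concrete values of $N$. Dropping the $o(1)$ term and taking $\log$ to base 2 are simplifying assumptions on our part, not choices made in the paper.

    ```python
    import math

    # Separation factor N^{sqrt(2/log N)} between reversible and classical
    # cumulative pebbling cost of a line graph, with the o(1) term dropped and
    # log read as base 2 (both simplifying assumptions).
    for exp in (10, 20, 30, 40):
        N = 2 ** exp
        gap = N ** math.sqrt(2.0 / math.log2(N))   # N^{sqrt(2)/sqrt(log N)}
        print(f"N = 2^{exp:2d}: classical cost ~2^{exp}, gap ~2^{math.log2(gap):.1f}, "
              f"reversible lower bound ~2^{exp + math.log2(gap):.1f}")
    ```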



    ## 2024/563

    * Title: A Note on Related-Tweakey Impossible Differential Attacks
    * Authors: Xavier Bonnetain, Virginie Lallemand
    * [Permalink](https://eprint.iacr.org/2024/563)
    * [Download](https://eprint.iacr.org/2024/563.pdf)

    ### Abstract

    In this short note we review the technique proposed at ToSC 2018 by Sadeghi et al. for attacks built upon several related-tweakey impossible differential trails. We show that the initial encryption queries are improper and lead the authors to
    misevaluate a filtering value in the key recovery phase. We identified 4 papers (from Eurocrypt, DCC, ToSC and ePrint) that build on the results of Sadeghi et al., and in three of them the issue was propagated.
    We thus present a careful analysis of these types of attacks and give generic complexity formulas similar to the ones proposed by Boura et al. at Asiacrypt 2014. We apply these to the aforementioned papers and provide patched versions of their attacks.
    The main consequence is an increase in the memory complexity. We show that in many cases (a notable exception being quantum impossible differentials) it is possible to recover the numeric estimates of the flawed analysis, and in all cases we were able to
    build a correct attack reaching the same number of rounds.



    ## 2024/867

    * Title: Optimal Traitor Tracing from Pairings
    * Authors: Mark Zhandry
    * [Permalink](https://eprint.iacr.org/2024/867)
    * [Download](https://eprint.iacr.org/2024/867.pdf)

    ### Abstract

    We use pairings over elliptic curves to give a collusion-resistant traitor tracing scheme where the sizes of public keys, secret keys, and ciphertexts are independent of the number of users. Prior constructions from pairings had size $\Omega(N^{1/3})$.
    An additional consequence of our techniques is a general result showing that attribute-based encryption for circuits generically implies optimal traitor tracing.



    ## 2024/1582

    * Title: Halving differential additions on Kummer lines
    * Authors: Damien Robert, Nicolas Sarkis
    * [Permalink](https://eprint.iacr.org/2024/1582)
    * [Download](https://eprint.iacr.org/2024/1582.pdf)

    ### Abstract

    We study differential addition formulas on Kummer lines that factorize through a degree $2$ isogeny $\phi$. We call the resulting formulas half differential additions: from the knowledge of $\phi(P), \phi(Q)$ and $P-Q$, the half differential addition
    allows one to recover $P+Q$. We explain how Mumford's theta group theory allows one, in any model of Kummer lines, to find a basis of the half differential relations. This involves studying the dimension $2$ isogeny $(P, Q) \mapsto (P+Q, P-Q)$.

    We then use the half differential addition formulas to build a new type of Montgomery ladder, called the half-ladder, using a time-memory trade-off. On a Montgomery curve with full rational $2$-torsion, our half ladder first builds a succession of isogeny
    images $P_i=\phi_i(P_{i-1})$, which only depends on the base point $P$ and not on the scalar $n$, for a precomputation cost of $2S+1m_0$ per bit. Then we use half doublings and half differential additions to compute any scalar multiplication $n \cdot P$,
    for a cost of $4M+2S+1m_0$ per bit. The total cost is then $4M+4S+2m_0$ per bit, even when the base point $P$ is not normalized. By contrast, the usual Montgomery ladder costs $4M+4S+1m+1m_0$ per bit, for a normalized point.

    In the appendix, we extend our approach to higher-dimensional ladders in theta coordinates or twisted theta coordinates. In dimension $2$, after a precomputation step which depends on the base point $P$, our half ladder only costs $7M + 4S + 3m_0$,
    compared to $10M+9S+6m_0$ for the standard ladder.
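
    To make the per-bit comparison concrete, the sketch below simply tallies the operation counts quoted above over a 256-bit scalar. Reading $M$/$S$ as field multiplications/squarings and $m$, $m_0$ as multiplications by fixed constants follows the usual ladder cost accounting; the bookkeeping is ours, and only the per-bit counts come from the abstract.

    ```python
    # Per-scalar operation totals from the per-bit costs quoted in the abstract:
    # half ladder 4M + 4S + 2m0 (its 2S + 1m0 precomputation already included),
    # standard Montgomery ladder 4M + 4S + 1m + 1m0 for a normalized base point.
    def totals(per_bit, bits):
        return {op: count * bits for op, count in per_bit.items()}

    HALF_LADDER = {"M": 4, "S": 4, "m0": 2}
    STANDARD    = {"M": 4, "S": 4, "m": 1, "m0": 1}

    bits = 256
    print("half ladder :", totals(HALF_LADDER, bits))  # {'M': 1024, 'S': 1024, 'm0': 512}
    print("standard    :", totals(STANDARD, bits))     # {'M': 1024, 'S': 1024, 'm': 256, 'm0': 256}
    ```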



    ## 2024/1591

    * Title: MPC-in-the-Head Framework without Repetition and its Applications to the Lattice-based Cryptography
    * Authors: Weihao Bai, Long Chen, Qianwen Gao, Zhenfeng Zhang
    * [Permalink](https://eprint.iacr.org/2024/1591)
    * [Download](https://eprint.iacr.org/2024/1591.pdf)

    ### Abstract

    The MPC-in-the-Head framework has been proposed as a solution for Non-Interactive Zero-Knowledge Arguments of Knowledge (NIZKAoK) due to its efficient proof generation. However, most existing NIZKAoK constructions using this approach require multiple MPC evaluations to achieve negligible
    soundness error, resulting in proof size and time that are asymptotically at least λ times the size of the circuit of the NP relation. In this paper, we propose a novel method to eliminate the need for repeated MPC evaluations, resulting in a NIZKAoK
    protocol for any NP relation that we call Diet. The proof size and time of Diet are asymptotically only polylogarithmic with respect to the size of the circuit C of the NP relation but are independent of the security parameter λ. Hence, both the proof
    size and time can be significantly reduced.

    Moreover, Diet offers promising concrete efficiency for proving Learning With Errors (LWE) problems and its variants. Our solution provides significant advantages over other schemes in terms of both proof size and proof time, when considering both
    factors together. Specifically, Diet is a promising method for proving knowledge of secret keys for lattice-based key encapsulation mechanisms (KEMs) such as Frodo and Kyber, offering a practical solution to future post-quantum certificate management.
    For Kyber 512, our implementation achieves an online proof size of 83.65 kilobytes (KB) with a preprocessing overhead of 152.02 KB. The implementation is highly efficient, with an online proof time of only 0.68 seconds and a preprocessing time of 0.81
    seconds. Notably, our approach provides the first reported implementation of proving knowledge of secret keys for Kyber 512 using zero-knowledge proofs built from post-quantum primitives.



    ## 2024/1595

    * Title: DeepFold: Efficient Multilinear Polynomial Commitment from Reed-Solomon Code and Its Application to Zero-knowledge Proofs
    * Authors: Yanpei Guo, Xuanming Liu, Kexi Huang, Wenjie Qu, Tianyang Tao, Jiaheng Zhang
    * [Permalink](https://eprint.iacr.org/2024/1595)
    * [Download](https://eprint.iacr.org/2024/1595.pdf)

    ### Abstract

    This work presents Deepfold, a novel multilinear polynomial commitment scheme (PCS) based on Reed-Solomon code that offers optimal prover time and a more concise proof size. For the first time, Deepfold adapts the FRI-based multilinear PCS to the list
    decoding radius setting, requiring significantly fewer query repetitions and thereby achieving a 3× reduction in proof size compared to Basefold (Crypto'24), while preserving its advantages in prover time. Compared with PolyFRIM (USENIX Security'24),
    Deepfold achieves a 2× improvement in prover time, verifier time, and proof size. Another contribution of this work is a batch evaluation scheme, which enables the FRI-based multilinear PCS to handle polynomials encoded from inputs of arbitrary length
    without additional padding overhead.

    Our scheme has broad applications in zk-SNARKs, since PCS is a key component in modern zk-SNARK constructions. For example, when replacing the PCS component of Virgo (S&P'20) with Deepfold, our scheme achieves a 2.5× faster prover time when proving the
    knowledge of a Merkle tree with 256 leaves, while maintaining a similar proof size. When replacing the PCS component of HyperPlonk (Eurocrypt'23) with Deepfold, our scheme has about 3.6× faster prover time. Additionally, when applying our
    arbitrary-length input commitment to verifiable matrix multiplications for matrices of size 1200×768 and 768×2304, which are actual use cases in the GPT-2 model, the performance showcases a
    2.4× reduction in prover time compared to previous approaches.



    ## 2024/1596

    * Title: Secret Sharing with Publicly Verifiable Deletion
    * Authors: Jonathan Katz, Ben Sela
    * [Permalink](https://eprint.iacr.org/2024/1596)
    * [Download](https://eprint.iacr.org/2024/1596.pdf)

    ### Abstract

    Certified deletion, an inherently quantum capability, allows a party holding a quantum state to prove that they have deleted the information contained in that state. Bartusek and Raizes recently studied certified deletion in the context of secret sharing
    schemes, and showed constructions with privately verifiable proofs of deletion that can be verified only by the dealer who generated the shares. We give two constructions of secret sharing schemes with publicly verifiable certified deletion. Our first
    construction is based on the post-quantum security of the LWE problem, and each share requires a number of qubits that is linear in the size of an underlying classical secret sharing scheme for the same set of authorized parties. Our second construction
    is based on a more general assumption, the existence of post-quantum one-way functions, but requires an asymptotically larger number of qubits relative to the share size of the underlying classical scheme.



    ## 2024/1597

    * Title: An undetectable watermark for generative image models
    * Authors: Sam Gunn, Xuandong Zhao, Dawn Song
    * [Permalink](https://eprint.iacr.org/2024/1597)
    * [Download](https://eprint.iacr.org/2024/1597.pdf)

    ### Abstract

    We present the first undetectable watermarking scheme for generative image models. Undetectability ensures that no efficient adversary can distinguish between watermarked and un-watermarked images, even after making many adaptive queries. In particular,
    an undetectable watermark does not degrade image quality under any efficiently computable metric. Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code (Christ and Gunn, 2024), a strategy which
    guarantees undetectability and robustness.

    We experimentally demonstrate that our watermarks are quality-preserving and robust using Stable Diffusion 2.1. Our experiments verify that, in contrast to every prior scheme we tested, our watermark does not degrade image quality. Our experiments also
    demonstrate robustness: existing watermark removal attacks fail to remove our watermark from images without significantly degrading the quality of the images. Finally, we find that we can robustly encode 512 bits in our watermark, and up to 2500 bits
    when the images are not subjected to watermark removal attacks. Our code is available at https://github.com/XuandongZhao/PRC-Watermark.
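
    A toy, heavily simplified sketch of the mechanism described above: a keyed pseudorandom pattern fixes the signs of a Gaussian-looking initial latent, and detection correlates recovered signs against that pattern. The keyed SHA-256 stand-in, the sign/magnitude split, and the detection threshold are illustrative assumptions; the actual scheme uses the pseudorandom error-correcting codes of Christ and Gunn (2024) and a diffusion model's latent inversion, neither of which is modeled here.

    ```python
    import hashlib
    import numpy as np

    def keyed_signs(key: bytes, n: int) -> np.ndarray:
        """Pseudorandom +/-1 pattern from the key (toy stand-in for a PRC codeword)."""
        bits, counter = [], 0
        while len(bits) < n:
            block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            bits.extend((byte >> i) & 1 for byte in block for i in range(8))
            counter += 1
        return np.array(bits[:n]) * 2 - 1

    def watermarked_latent(key: bytes, n: int, rng) -> np.ndarray:
        """Gaussian-looking initial latent whose signs carry the keyed pattern."""
        return keyed_signs(key, n) * np.abs(rng.standard_normal(n))

    def detect(key: bytes, latent: np.ndarray, threshold: float = 0.6) -> bool:
        """Declare 'watermarked' if sign agreement with the keyed pattern is high."""
        agreement = np.mean(np.sign(latent) == keyed_signs(key, latent.size))
        return agreement > threshold

    rng = np.random.default_rng(0)
    key = b"secret watermarking key"
    z = watermarked_latent(key, 4096, rng)
    noisy = z + 0.4 * rng.standard_normal(z.size)        # mild distortion
    print(detect(key, noisy), detect(key, rng.standard_normal(4096)))  # True False
    ```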



    ## 2024/1598

    * Title: On the security of the initial tropical Stickel protocol and its modification based on Linde-de la Puente matrices
    * Authors: Sulaiman Alhussaini, Sergeĭ Sergeev
    * [Permalink](https://eprint.iacr.org/2024/1598)
    * [Download](https://eprint.iacr.org/2024/1598.pdf)

    ### Abstract

    Recently, a more efficient attack on the initial tropical Stickel protocol has been proposed, different from the previously known Kotov-Ushakov attack, yet equally guaranteed to succeed. Given that the Stickel protocol can be implemented in various ways,
    such as utilizing platforms beyond the tropical semiring or employing alternative commutative matrix "classes" instead of polynomials, we first explore the generalizability of this new attack across different implementations of the Stickel protocol.
    We then conduct a comprehensive security analysis of a tropical variant that successfully resists this new attack, namely the Stickel protocol based on Linde-de la Puente (LdlP) matrices. Additionally, we extend the concept of LdlP matrices beyond the
    tropical semiring, generalizing it to a broader class of semirings.
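
    For readers unfamiliar with the tropical setting, the sketch below shows min-plus matrix arithmetic (one common convention for the tropical semiring) and the elementary fact that powers of a public matrix commute under it, which is the flavor of commutative matrix class a Stickel-type exchange relies on. It does not describe the LdlP class or any concrete protocol parameters.

    ```python
    import numpy as np

    def trop_mul(A, B):
        """Tropical (min-plus) matrix product: C[i,j] = min_k (A[i,k] + B[k,j])."""
        return np.min(A[:, :, None] + B[None, :, :], axis=1)

    def trop_pow(W, k):
        P = W
        for _ in range(k - 1):
            P = trop_mul(P, W)
        return P

    rng = np.random.default_rng(1)
    W = rng.integers(0, 100, size=(4, 4))
    A, B = trop_pow(W, 2), trop_pow(W, 3)
    # Powers of W commute under the tropical product: A (x) B == B (x) A == W^5.
    print(np.array_equal(trop_mul(A, B), trop_mul(B, A)))   # True
    ```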



    ## 2024/1599

    * Title: Simplified PIR and CDS Protocols and Improved Linear Secret-Sharing Schemes
    * Authors: Bar Alon, Amos Beimel, Or Lasri
    * [Permalink](https://eprint.iacr.org/2024/1599)
    * [Download](https://eprint.iacr.org/2024/1599.pdf)

    ### Abstract

    We consider three related cryptographic primitives: private information retrieval (PIR) protocols, conditional disclosure of secrets (CDS) protocols, and secret-sharing schemes; these primitives have many applications in cryptography. We study these
    primitives requiring information-theoretic security. The complexity of these primitives has improved dramatically in the last few years, and they are closely related: the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) was transformed
    to construct the CDS protocols of Liu, Vaikuntanathan, and Wee (CRYPTO 2017, Eurocrypt 2018), and these CDS protocols are the main ingredient in the construction of the best known secret-sharing schemes.
    To date, the message sizes required in PIR and CDS protocols and the share size required in secret-sharing schemes are not understood, and there are big gaps between their upper bounds and lower bounds. The goal of this paper is to try to better
    understand the upper bounds by simplifying current constructions and improving their complexity.
    We obtain the following two independent results:
    - We simplify, abstract, and generalize the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) and the 2-server and multi-server CDS protocols of Liu et al. (CRYPTO 2017, Eurocrypt 2018) and Beimel, Farràs, and Lasri (TCC 2023). This is done by considering a new variant of matching vectors and by using a general share conversion. In addition to simplifying previous protocols, our protocols can use matching vectors over any $m$ that is a product of two distinct primes.
    Our construction does not improve the communication complexity of PIR and CDS protocols; however, a construction of better matching vectors over any $m$ that is a product of two distinct primes will improve their communication complexity.

    - In many applications of secret-sharing schemes it is important that the scheme is linear, e.g., by using the fact that parties can locally add shares of two secrets and obtain shares of the sum of the secrets (see the sketch below). We provide a construction of linear secret-sharing schemes for $n$-party access structures with an improved share size of $2^{0.7563n}$. Previously, the best share size for linear secret-sharing schemes was $2^{0.7576n}$, and it is known that for most $n$-party access structures the share size is at least $2^{0.5n}$. This result is achieved by a reduction to unbalanced CDS protocols (compared to balanced CDS protocols in previous constructions).
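
    The linearity property highlighted in the second item is easy to see with plain additive secret sharing over $\mathbb{Z}_p$; the minimal $n$-out-of-$n$ sketch below (modulus chosen arbitrarily for illustration) only demonstrates that local share addition yields shares of the sum, not the paper's improved schemes.

    ```python
    import secrets

    P = 2**61 - 1  # a prime modulus, chosen here purely for illustration

    def share(secret, n):
        """Additive n-out-of-n sharing of `secret` over Z_P."""
        parts = [secrets.randbelow(P) for _ in range(n - 1)]
        parts.append((secret - sum(parts)) % P)
        return parts

    def reconstruct(shares):
        return sum(shares) % P

    n = 5
    s1, s2 = 1234, 5678
    sh1, sh2 = share(s1, n), share(s2, n)
    sum_shares = [(a + b) % P for a, b in zip(sh1, sh2)]   # purely local per party
    assert reconstruct(sum_shares) == (s1 + s2) % P
    print("shares of the sum reconstruct to", reconstruct(sum_shares))
    ```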



    ## 2024/1600

    * Title: Pacmann: Efficient Private Approximate Nearest Neighbor Search
    * Authors: Mingxun Zhou, Elaine Shi, Giulia Fanti
    * [Permalink](https://eprint.iacr.org/2024/1600)
    * [Download](https://eprint.iacr.org/2024/1600.pdf)

    ### Abstract

    We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search
    on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally-intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the
    client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2)
    we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5$\times$ better search
    accuracy on real-world datasets than prior work and reaching 90\% of the quality of a state-of-the-art non-private ANN algorithm. Moreover, on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes
    with up to 2.6$\times$ reduction in computation time and 1.3$\times$ reduction in overall latency.
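
    A heavily hedged sketch of the client-side loop described above: a greedy graph walk in which each visited node's neighbor list is obtained through a private-retrieval call. The `pir_fetch_neighbors` stub, the greedy rule, and the toy data are illustrative assumptions, not Pacmann's actual algorithm or PIR scheme.

    ```python
    import numpy as np

    def pir_fetch_neighbors(node_id, graph):
        # Placeholder: a real client would issue a PIR query here, hiding node_id
        # from the server; for this demo we just read the adjacency list locally.
        return graph[node_id]

    def private_greedy_search(query, vectors, graph, start, hops=10):
        current = start
        for _ in range(hops):
            neighbors = pir_fetch_neighbors(current, graph)
            candidates = [current] + list(neighbors)
            best = min(candidates, key=lambda v: np.linalg.norm(vectors[v] - query))
            if best == current:      # no closer neighbor: local optimum reached
                break
            current = best
        return current

    # Tiny toy instance: 6 points on a line, ring adjacency.
    vectors = np.array([[float(i)] for i in range(6)])
    graph = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(private_greedy_search(np.array([3.7]), vectors, graph, start=0))  # 4
    ```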



    ## 2024/1601

    * Title: Juggernaut: Efficient Crypto-Agnostic Byzantine Agreement
    * Authors: Daniel Collins, Yuval Efron, Jovan Komatovic
    * [Permalink](https://eprint.iacr.org/2024/1601)
    * [Download](https://eprint.iacr.org/2024/1601.pdf)

    ### Abstract

    It is well known that a trusted setup allows one to solve the Byzantine agreement problem in the presence of $t<n/2$ corruptions, bypassing the setup-free $t<n/3$ barrier. Alas, the overwhelming majority of protocols in the literature have the caveat
    that their security crucially hinges on the security of the cryptography and setup, to the point where if the cryptography is broken, even a single corrupted party can violate the security of the protocol. Thus these protocols provide higher corruption
    resilience ($n/2$ instead of $n/3$) for the price of increased assumptions. Is this trade-off necessary?

    We further the study of crypto-agnostic Byzantine agreement among $n$ parties that answers this question in the negative. Specifically, let $t_s$ and $t_i$ denote two parameters such that (1) $2t_i + t_s < n$, and (2) $t_i \leq t_s < n/2$. Crypto-
    agnostic Byzantine agreement ensures agreement among honest parties if (1) the adversary is computationally bounded and corrupts up to $t_s$ parties, or (2) the adversary is computationally unbounded and corrupts up to $t_i$ parties, and is moreover
    given all secrets of all parties established during the setup. We propose a compiler that transforms any pair of resilience-optimal Byzantine agreement protocols in the authenticated and information-theoretic setting into one that is crypto-agnostic. Our
    compiler has several attractive qualities, including using only $O(\lambda n^2)$ bits over the two underlying Byzantine agreement protocols, and preserving round and communication complexity in the authenticated setting. In particular, our results
    improve the state-of-the-art in bit complexity by at least two factors of $n$ and provide either early stopping (deterministic) or expected constant round complexity (randomized). We therefore provide fallback security for authenticated Byzantine
    agreement for free for $t_i \leq n/4$.
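
    The two resilience conditions are easy to explore numerically; the snippet below enumerates admissible $(t_i, t_s)$ pairs for a small illustrative $n$ and confirms that $t_i = n/4$ is compatible with $t_s$ just below $n/2$, in line with the closing remark. The choice $n = 12$ is an arbitrary example.

    ```python
    # Numeric check of the resilience conditions stated above:
    # (1) 2*t_i + t_s < n and (2) t_i <= t_s < n/2.
    def admissible(n):
        return [(ti, ts)
                for ts in range(n)
                for ti in range(n)
                if 2 * ti + ts < n and ti <= ts < n / 2]

    n = 12
    print([p for p in admissible(n) if p[1] == 5])  # t_s = 5 < n/2: t_i ranges up to 3 = n/4
    ```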



    ## 2024/1602

    * Title: Cryptography and Collective Power
    * Authors: Leah Namisa Rosenbloom
    * [Permalink](https://eprint.iacr.org/2024/1602)
    * [Download](https://eprint.iacr.org/2024/1602.pdf)

    ### Abstract

    This paper extends the dialogue of "The Moral Character of Cryptographic Work" (Rogaway, 2015) and "Crypto for the People" (Kamara, 2020) by examining the relationship between cryptography and collective power. In particular, it considers cryptography in
    the context of grassroots organizing—a process by which marginalized people build collective power toward effecting systemic change—and illustrates the ways in which cryptography has both helped and hindered organizing efforts. Based on the synthesis
    of dozens of qualitative studies, scholarly critiques, and historical artifacts, this work introduces a paradigm shift for cryptographic protocol design—general principles and recommendations for building cryptography to address the lived needs and
    experiences of marginalized people. Finally, it calls for abolition cryptography: cryptographic theories and practices which dismantle harmful systems and replace them with systems that sustain human lives and livelihoods.



    ## 2024/1603

    * Title: Boosting SNARKs and Rate-1 Barrier in Arguments of Knowledge
    * Authors: Jiaqi Cheng, Rishab Goyal
    * [Permalink](https://eprint.iacr.org/2024/1603)
    * [Download](https://eprint.iacr.org/2024/1603.pdf)

    ### Abstract

    We design a generic compiler to boost any non-trivial succinct non-interactive argument of knowledge (SNARK) to full succinctness. Our results come in two flavors:

    For any constant $\epsilon > 0$, any SNARK with proof size $|\pi| < \frac{|\omega|}{\lambda^\epsilon} + \mathsf{poly}(\lambda, |x|)$ can be upgraded to a fully succinct SNARK, where all system parameters (such as proof/CRS sizes and setup/verifier run-
    times) grow as fixed polynomials in $\lambda$, independent of witness size.

    Under an additional assumption that the underlying SNARK has an \emph{efficient} knowledge extractor, we further improve our result to upgrade any non-trivial SNARK. For example, we show how to design fully succinct SNARKs from SNARKs with proofs of
    length $|\omega| - \Omega(\lambda)$, or $\frac{|\omega|}{1 + \epsilon} + \mathsf{poly}(\lambda, |x|)$ for any constant $\epsilon > 0$.

    Our result reduces the long-standing challenge of designing fully succinct SNARKs to designing \emph{arguments of knowledge that beat the trivial construction}. It also establishes optimality of rate-1 arguments of knowledge (such as NIZKs [Gentry-Groth-
    Ishai-Peikert-Sahai-Smith; JoC'15] and BARGs [Devadas-Goyal-Kalai-Vaikuntanathan, Paneth-Pass; FOCS'22]), and suggests any further improvement is tantamount to designing fully succinct SNARKs, thus requires bypassing established black-box barriers [
    Gentry-Wichs; STOC'11].
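
    As a back-of-the-envelope reading of the first flavor above: the trivial argument simply sends the witness, so "non-trivial" means beating $|\omega|$ by a $\lambda^\epsilon$ factor, up to $\mathsf{poly}(\lambda, |x|)$ additive terms. The concrete numbers below are purely illustrative assumptions.

    ```python
    # Illustrative thresholds for the first flavor of the compiler above.
    lam, eps = 128, 0.5
    for log_w in (20, 30):
        w = 2 ** log_w
        print(f"|omega| = 2^{log_w}: SNARKs with proofs below ~{w / lam**eps:.3e} "
              f"(plus poly(lambda,|x|)) can be boosted to full succinctness")
    ```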



    ## 2024/1604

    * Title: Predicting truncated multiple matrix congruential generators with unknown parameters
    * Authors: Changcun Wang, Zhaopeng Dai
    * [Permalink](https://eprint.iacr.org/2024/1604)
    * [Download](https://eprint.iacr.org/2024/1604.pdf)

    ### Abstract

    Multiple matrix congruential generators are an important class of pseudorandom number generators. This paper studies the predictability of a class of truncated multiple matrix congruential generators with unknown parameters. Given a few truncated digits
    of high-order bits or low-order bits output by a multiple matrix congruential generator, we give a method based on lattice reduction to recover the parameters and the initial state of the generator.
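
    As a hedged illustration of the object under attack, the sketch below implements one natural reading of a multiple matrix congruential generator, namely a matrix recurrence $X_j = (A_1 X_{j-1} + \cdots + A_k X_{j-k}) \bmod m$ whose states are released only as high-order bits. This recurrence and all parameters are our assumptions, and the paper's lattice-reduction recovery step is not reproduced.

    ```python
    import random

    def mat_vec(A, x, m):
        return [sum(a * b for a, b in zip(row, x)) % m for row in A]

    def mmcg_truncated(A_list, init_states, m, steps, kept_bits):
        states = list(init_states)                 # X_{j-k}, ..., X_{j-1}
        shift = m.bit_length() - 1 - kept_bits     # number of low-order bits dropped
        out = []
        for _ in range(steps):
            acc = [0] * len(init_states[0])
            for A, s in zip(A_list, reversed(states)):
                acc = [(a + b) % m for a, b in zip(acc, mat_vec(A, s, m))]
            out.append([v >> shift for v in acc])  # truncated high-order bits
            states = states[1:] + [acc]
        return out

    random.seed(0)
    m, d, k = 2**32, 3, 2
    A_list = [[[random.randrange(m) for _ in range(d)] for _ in range(d)] for _ in range(k)]
    init = [[random.randrange(m) for _ in range(d)] for _ in range(k)]
    print(mmcg_truncated(A_list, init, m, steps=3, kept_bits=8))
    ```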



    ## 2024/1605

    * Title: Nebula: Efficient read-write memory and switchboard circuits for folding schemes
    * Authors: Arasu Arun, Srinath Setty
    * [Permalink](https://eprint.iacr.org/2024/1605)
    * [Download](https://eprint.iacr.org/2024/1605.pdf)

    ### Abstract

    Folding schemes enable prover-efficient incrementally verifiable computation (IVC), where a proof is generated step-by-step, resulting in a space-efficient prover that naturally supports continuations. These attributes make them a promising choice for
    proving long-running machine executions (popularly, "zkVMs"). A major problem is designing an efficient read-write memory. Another challenge is the overhead incurred by unused machine instructions when incrementally proving a program execution step.

    Nebula addresses these with new techniques that can be paired with modern folding schemes. First, we introduce commitment-carrying IVC, where a proof carries an incremental commitment to the prover’s non-deterministic advice provided at different steps.
    Second, we show how this unlocks efficient read-write memory (which implies indexed lookups) with a cost-profile identical to that of non-recursive arguments. Third, we provide a new universal "switchboard" circuit construction that combines circuits of
    different instructions such that one can "turn off" uninvoked circuit elements and constraints, offering a new way to achieve pay-per-use prover costs.

    We implement a prototype of a Nebula-based zkVM for the Ethereum virtual machine (EVM). We find that Nebula’s techniques qualitatively provide a $30\times$ smaller constraint system to represent the EVM over standard memory-checking techniques, and
    lead to over $260\times$ faster proof generation for the standard ERC20 token transfer transaction.
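
    One generic way to picture the "turn off" behavior of a switchboard, sketched below, is to gate each instruction's constraints with a selector bit so that only the invoked instruction's constraints must actually hold. This selector-gating toy is meant only to convey the intuition and is not Nebula's concrete circuit construction.

    ```python
    def gated_constraints(selectors, instruction_checks, witness):
        """Residuals s_i * C_i(witness); a selector of 0 'turns off' that constraint."""
        assert sum(selectors) == 1          # exactly one instruction active per step
        return [s * check(witness) for s, check in zip(selectors, instruction_checks)]

    # Two toy "instructions" over witness (a, b, out): an adder and a multiplier.
    add_check = lambda w: w["out"] - (w["a"] + w["b"])   # == 0 iff out = a + b
    mul_check = lambda w: w["out"] - (w["a"] * w["b"])   # == 0 iff out = a * b

    witness = {"a": 3, "b": 4, "out": 7}
    print(gated_constraints([1, 0], [add_check, mul_check], witness))  # [0, 0]: ADD step satisfied
    print(gated_constraints([0, 1], [add_check, mul_check], witness))  # [0, -5]: MUL step violated
    ```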



    ## 2024/1606

    * Title: NeutronNova: Folding everything that reduces to zero-check
    * Authors: Abhiram Kothapalli, Srinath Setty
    * [Permalink](https://eprint.iacr.org/2024/1606)
    * [Download](https://eprint.iacr.org/2024/1606.pdf)

    ### Abstract

    We introduce NeutronNova, a new folding scheme for the zero-check relation: an instance-witness pair is in the zero-check relation if a corresponding multivariate polynomial evaluates to zero for all inputs over a suitable Boolean hypercube. The folding
    scheme is a two-round protocol, and it internally invokes a \emph{single} round of the sum-check protocol. The folding scheme is more efficient than prior state-of-the-art schemes and directly benefits from recent improvements to the sum-check prover.
    The prover’s work is the cost of committing to a witness plus the field operations in a single round of the sum-check protocol. So, if the witness contains "small" field elements, the prover only commits to "small" field elements. The verifier’s work is a
    constant number of group scalar multiplications, field operations, and hash computations. Moreover, the folding scheme generalizes to fold multiple instances at once and requires only $\log{n}$ rounds of the sum-check protocol, where $n$ is the number of
    instances folded.
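
    The zero-check relation defined above can be restated directly as code: the sketch below brute-forces membership for a toy polynomial over the Boolean hypercube. The protocol machinery above exists precisely to avoid this exponential check; the example polynomials are ours.

    ```python
    from itertools import product

    def in_zero_check(poly, num_vars):
        """True iff poly evaluates to 0 at every point of the Boolean hypercube."""
        return all(poly(*x) == 0 for x in product((0, 1), repeat=num_vars))

    # g vanishes on {0,1}^3 since x*(1-x) = 0 and x*(x-1) = 0 for x in {0,1}.
    g = lambda x1, x2, x3: x1 * (1 - x1) * x2 + x3 * (x3 - 1)
    h = lambda x1, x2, x3: x1 * x2 - x3          # fails, e.g. at (1, 1, 0)
    print(in_zero_check(g, 3), in_zero_check(h, 3))   # True False
    ```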


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)