[continued from previous message]
Within the provable security community, the use of TEEs as a setup assumption has converged to a standard ideal definition in the Universal Composability setting ($G_\mathsf{att}$, defined by Pass et al., Eurocrypt '17). However, it is unclear whether any real TEE design can actually implement this, or whether the diverse capabilities of today's TEE implementations will in fact converge to a single standard. Therefore, cryptographers and protocol designers need to specify which assumptions the TEE they are using must satisfy in order to support the correctness and security of their protocol.
To this end, this paper provides a more careful treatment of trusted execution than the existing literature, focusing on the capabilities of enclaves and adversaries. Our goal is to provide meaningful patterns for comparing different classes of TEEs, particularly how a weaker TEE functionality can UC-emulate a stronger one given an appropriate mechanism to bridge the two. We introduce a new, ``modular'' definition of TEEs that captures a broad range of pre-existing functionalities defined in the
literature while maintaining their high level of abstraction. While our goal is not directly to model implementations of specific commercial TEE providers, our modular definition provides a way to capture more meaningful and realistic hardware
capabilities. We provide a language to characterise TEE capabilities along the following terms:
- a set of trusted features available to the enclave;
- the set of allowed attacks for malicious interactions with the enclaves;
- the contents of attestation signatures.
We then define various possible ideal modular $G_\mathsf{att}$ functionality instantiations that capture existing variants in the literature, and provide generic constructions to implement stronger enclave functionalities from an existing setup. Finally,
we conclude the paper with a simple example of how to protect against rollback attacks given access to a trusted storage feature.
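The closing example can be illustrated with a minimal sketch of rollback protection built on a trusted storage feature: the enclave pins the latest state version in trusted storage (modeled here as a monotonic counter), so an adversary who replays an older, correctly authenticated sealed blob is detected. All names (`TrustedStorage`, `Enclave`, `seal`/`unseal`) are hypothetical illustrations, not the paper's actual interfaces.

```python
import hashlib
import hmac

class TrustedStorage:
    """Models a trusted storage feature as a monotonic version counter."""
    def __init__(self):
        self._version = 0

    def advance(self) -> int:
        self._version += 1
        return self._version

    def current(self) -> int:
        return self._version

class Enclave:
    """Seals state to untrusted disk; trusted storage pins the latest version."""
    def __init__(self, storage: TrustedStorage, sealing_key: bytes):
        self._storage = storage
        # In a real TEE this would be an enclave-bound sealing key.
        self._key = sealing_key

    def seal(self, state: bytes):
        version = self._storage.advance()
        tag = hmac.new(self._key, version.to_bytes(8, "big") + state,
                       hashlib.sha256).digest()
        # The (version, state, tag) tuple lives on untrusted storage.
        return (version, state, tag)

    def unseal(self, sealed) -> bytes:
        version, state, tag = sealed
        expected = hmac.new(self._key, version.to_bytes(8, "big") + state,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("sealed blob was tampered with")
        if version != self._storage.current():
            raise ValueError("rollback detected: stale version presented")
        return state
```

The integrity tag alone cannot stop replay of an old blob; only the version check against trusted storage does, which is the point of the trusted storage assumption.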
## 2024/1370
* Title: ML based Improved Differential Distinguisher with High Accuracy: Application to GIFT-128 and ASCON
* Authors: Tarun Yadav, Manoj Kumar
* [Permalink](https://eprint.iacr.org/2024/1370)
* [Download](https://eprint.iacr.org/2024/1370.pdf)
### Abstract
In recent years, ML based differential distinguishers have been explored and compared with classical methods. The complexity of a key recovery attack on a block cipher is calculated using the probability of a differential distinguisher provided by classical methods. Since theoretical computations suffice to calculate the data complexity in these cases, classical methods place no restriction on the practical availability of computational resources to attack a block cipher. However, ML based differential cryptanalysis relies on a machine learning model that learns features from encrypted data using the available compute power. This limits the accuracy of an ML distinguisher as the number of rounds grows and for ciphers with large block sizes: the distinguisher can still be constructed, but its accuracy becomes very low in such cases. In this paper, we present a new approach to construct a high-accuracy differential distinguisher from an existing ML based distinguisher of low accuracy. This approach outperforms all existing approaches with a similar objective. We demonstrate our method by constructing high-accuracy ML based distinguishers for GIFT-128 and the ASCON permutation. For GIFT-128, the accuracy of the 7-round distinguisher is increased to 98.8% with $2^{9}$ data complexity. For ASCON, the accuracy of the 4-round distinguisher is increased to 99.4% with $2^{18}$ data complexity. We present the first ML based distinguisher for 8 rounds of GIFT-128 using the differential-ML distinguisher presented at Latincrypt 2021. This distinguisher is constructed with 99.8% accuracy and $2^{18}$ data complexity.
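The generic trade-off the abstract exploits, spending more data to raise a distinguisher's accuracy, can be sketched with a toy simulation (this is not the paper's construction): a weak distinguisher that is correct with probability slightly above 1/2 is amplified by majority voting over many independent samples. All function names and parameters below are illustrative assumptions.

```python
import random

def weak_distinguisher(is_cipher: bool, p: float = 0.55) -> int:
    """Toy model of a low-accuracy distinguisher: correct with probability p."""
    correct = 1 if is_cipher else 0
    return correct if random.random() < p else 1 - correct

def amplified(is_cipher: bool, n: int = 101, p: float = 0.55) -> int:
    """Majority vote over n independent weak decisions (higher data cost)."""
    votes = sum(weak_distinguisher(is_cipher, p) for _ in range(n))
    return 1 if votes > n / 2 else 0

def accuracy(dist, trials: int = 2000, **kw) -> float:
    """Empirical accuracy of a distinguisher on balanced random labels."""
    ok = 0
    for _ in range(trials):
        label = random.random() < 0.5
        ok += dist(label, **kw) == (1 if label else 0)
    return ok / trials
```

With `p = 0.55` and `n = 101`, the voted distinguisher's accuracy rises well above the single-query accuracy, at the cost of roughly `n` times the data, mirroring how accuracy can be traded against data complexity.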
## 2024/1371
* Title: PIGEON: A Framework for Private Inference of Neural Networks
* Authors: Christopher Harth-Kitzerow, Yongqin Wang, Rachit Rajat, Georg Carle, Murali Annavaram
* [Permalink](https://eprint.iacr.org/2024/1371)
* [Download](https://eprint.iacr.org/2024/1371.pdf)
### Abstract
Privacy-Preserving Machine Learning is one of the most relevant use cases for Secure Multiparty Computation (MPC). While private training of large neural networks such as VGG-16 or ResNet-50 on state-of-the-art datasets such as ImageNet is still out of reach given the performance overhead of MPC, private inference is starting to achieve practical runtimes. However, we show that, in contrast to plaintext machine learning, using GPU acceleration for both linear and nonlinear neural network layers is actually counterproductive in PPML and leads to performance and scaling penalties. This can be observed in the slow ReLU performance, high GPU memory requirements, and inefficient batch processing of state-of-the-art PPML frameworks, which hinder them from scaling to an inference throughput of multiple images per second and to more than eight images per batch on ImageNet.
To overcome these limitations, we propose PIGEON, an open-source framework for Private Inference of Neural Networks. PIGEON utilizes a novel ABG programming model that switches between \underline{A}rithmetic vectorization, \underline{B}itslicing, and \underline{G}PU offloading depending on the MPC-specific computation required by each layer.
Compared to the state-of-the-art PPML framework Piranha, PIGEON achieves a two-orders-of-magnitude improvement in ReLU throughput, reduces peak GPU memory utilization by one order of magnitude, and scales better with large batch sizes. This translates to one to two orders of magnitude higher throughput for large ImageNet batch sizes (e.g. 192) and more than 70\% saturation of a 25 Gbit/s network.
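The dispatch idea behind an ABG-style programming model can be sketched as follows: each layer declares which execution style suits its MPC computation, and a dispatcher routes it accordingly. This is a hypothetical illustration of the pattern, not PIGEON's actual API; the backend functions are placeholder toy computations.

```python
# Per-layer backend registry. In a real framework each backend would run
# MPC protocols; here each is a stand-in computation on a list of floats.
BACKENDS = {
    # Linear layers suit vectorized arithmetic on secret shares.
    "arithmetic": lambda xs: [2.0 * x + 1.0 for x in xs],  # toy affine layer
    # Comparisons (e.g. the sign test in ReLU) suit bitsliced boolean circuits.
    "bitslice": lambda xs: [max(0.0, x) for x in xs],      # toy ReLU
    # Large matrix products suit GPU offload (simulated here on the CPU).
    "gpu": lambda xs: [x * x for x in xs],                 # toy heavy op
}

def run_network(layers, xs):
    """Route each layer through its declared backend, ABG-style."""
    for backend in layers:
        xs = BACKENDS[backend](xs)
    return xs
```

The design point is that the switch happens per layer, so an arithmetic-friendly linear layer and a boolean-circuit-friendly ReLU can each use the representation in which its MPC cost is lowest.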