Welcome to the site of the talks organised by the Radboud Digital Security group. We organise a talk every Friday at 12:30.
DNSSEC provides authentication and integrity for the Domain Name System (DNS). To provide those two properties, DNSSEC relies on cryptography. The potential creation of a quantum computer that can break certain cryptographic primitives also affects DNSSEC. Therefore, it is necessary to empirically investigate if and how DNSSEC can transition to post-quantum cryptography (PQC). To that end, SIDN Labs has developed the *Post-quantum Algorithm Testing and Analysis for the DNS* (PATAD) testbed, which is available as open source software. In this presentation, we will explain why we developed the PATAD testbed, how it works and how it can be used, and we will discuss our active research in this area.
The Crossbred algorithm solves multivariate systems of nonlinear Boolean polynomial equations. It is mainly used in post-quantum cryptography, in particular in the cryptanalysis of multivariate schemes.
The key component is the construction and use of a Macaulay matrix of degree D. This matrix is then divided into smaller sub-matrices with which we work further. It is estimated that for values of n >= 63 and m >= 126 we should use D >= 5. The problem is that Macaulay matrices of degree D >= 5 are usually too big to fit into a typical amount of main memory, and that is where our main focus for optimization lies.
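To make the object concrete, here is a minimal, unoptimized sketch of how such a Macaulay matrix can be assembled for Boolean polynomials; the representation and the toy parameters are illustrative, not those of our implementation:

```python
from itertools import combinations

def monomials_up_to(n, d):
    """All monomials in n Boolean variables of degree <= d (frozenset() is 1)."""
    return [frozenset(c) for k in range(d + 1)
            for c in combinations(range(n), k)]

def multiply(poly, mono):
    """Multiply a polynomial (set of monomials) by a monomial.
    Since x_i^2 = x_i, the monomial product is a set union; over GF(2),
    coinciding monomials cancel (symmetric difference)."""
    result = set()
    for m in poly:
        result ^= {m | mono}
    return result

def macaulay_matrix(polys, n, D):
    """Rows are u*f for every monomial u with deg(u) + deg(f) <= D;
    columns are indexed by all monomials of degree <= D."""
    cols = monomials_up_to(n, D)
    index = {m: j for j, m in enumerate(cols)}
    rows = []
    for f in polys:
        deg_f = max(len(m) for m in f)
        for u in monomials_up_to(n, D - deg_f):
            row = [0] * len(cols)
            for m in multiply(f, u):
                row[index[m]] = 1
            rows.append(row)
    return rows, cols

# f1 = x0*x1 + x2 + 1, f2 = x0 + x1
f1 = {frozenset({0, 1}), frozenset({2}), frozenset()}
f2 = {frozenset({0}), frozenset({1})}
rows, cols = macaulay_matrix([f1, f2], n=3, D=2)
```

The memory problem is visible already here: the number of columns grows as the number of monomials of degree up to D, which explodes for n >= 63 and D >= 5.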
Our work started with an implementation of the algorithm. Currently, we are working on a new method for bringing a matrix into reduced row echelon form by working with parts of the matrix over many iterations, in order to reduce the peak memory usage at any single point in time. If successful, this method should enable solving bigger systems within the same typical amount of main memory.
The field of verifiable secret sharing schemes was introduced by Verheul et al. and has evolved over time, including well-known examples by Feldman and Pedersen. Stinson made advancements in combinatorial design-based secret sharing schemes in 2004. Desmedt et al. introduced the concept of frameproofness in 2021, while recent research by Sehrawat et al. in 2021 focuses on LWE-based access structure hiding verifiable secret sharing in the malicious-majority setting. Furthermore, Roy et al. combined the concepts of repairable threshold schemes by Stinson et al. and frameproofness by Desmedt et al. in 2023 to develop extendable tensor designs built from balanced incomplete block designs, and also presented a frameproof version of their design.
This talk explores ramp-type verifiable secret sharing schemes and the application of hidden access structures in such cryptographic protocols. Inspired by Sehrawat et al.'s access structure hiding scheme, we develop an $\epsilon$-almost access structure hiding scheme which is both verifiable and frameproof. We detail how the concept of $\epsilon$-almost hiding is important for incorporating ramp schemes, thus fundamentally generalising this concept.
Despite significant advancements in the security of Machine Learning (ML), the resilience of ML to evasion attacks still needs further investigation in the realm of malware detection because of the complexity of threat models. Specifically, in this context, adversaries must not only deceive detectors by transforming malware programs into adversarial ones but also ensure that the adversarial malware programs can effectively compromise the victim machines. My extensive research over the past three years highlights that a critical step in strengthening ML-based malware detectors against evasion attacks is to gain a realistic picture of adversarial ML in malware detection by examining both defensive and offensive aspects based on practical threat models.
In this talk, I will delve into my PhD research, reviewing the studies I have conducted to uncover the vulnerabilities of ML-based malware detection to practical evasion attacks and to propose effective solutions.
Throughout this talk, we present some of the main cryptographic problems in symmetric cryptography and highlight our primary motivations. Next, we focus on underlying fundamental mathematical problems related to Boolean and vectorial functions, and discuss algebraic approaches and ingredients used at the core of methodologies for designing classes of Boolean and vectorial functions (substitution boxes) with respect to their Walsh transform spectrum and their differential/boomerang uniformity, respectively.
CRYSTALS-Dilithium has been selected by NIST as the new standard for post-quantum digital signatures. In this work, we revisit the side-channel countermeasures of Dilithium in three directions. First, we improve its sensitivity analysis by classifying intermediate computations according to their physical security requirements. Second, we provide improved gadgets dedicated to Dilithium, taking advantage of recent advances in masking conversion algorithms. Third, we combine these contributions and report performance for side-channel protected Dilithium implementations. Our benchmarking results additionally put forward that the randomized version of Dilithium can lead to significantly more efficient implementations (than its deterministic version) when side-channel attacks are a concern.
Pseudonymization is a key part of the PubHubs and PEP projects developed at iHub. For PubHubs, it guarantees that an end-user cannot be tracked across Hubs, because they get a pseudonym that is unique to the Hub they visit. The malleability of Elgamal is used to create this pseudonym from an encryption of the user's central identity, by performing computations on the encrypted data.
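As a toy illustration of this malleability (deliberately insecure parameters and a plain multiplicative group, not the elliptic-curve setting used in practice): raising both Elgamal ciphertext components to a Hub-specific power turns an encryption of the central identity into an encryption of a Hub-local pseudonym, without ever decrypting.

```python
import secrets

# Toy parameters: a Mersenne prime and small generator (NOT secure).
p = (1 << 127) - 1
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1            # private key x
    return x, pow(g, x, p)                      # public key h = g^x

def encrypt(h, m):
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), (m * pow(h, r, p)) % p     # (g^r, m * h^r)

def pseudonymize(ct, s):
    """Raise both components to the power s:
    (g^r, m*h^r) -> (g^{rs}, m^s * h^{rs}), a fresh encryption of m^s."""
    c1, c2 = ct
    return pow(c1, s, p), pow(c2, s, p)

def decrypt(x, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p     # c2 * c1^{-x}

x, h = keygen()
identity = 1234567                  # the user's central identity (toy value)
s = 42                              # Hub-specific pseudonymization factor
ct = pseudonymize(encrypt(h, identity), s)
assert decrypt(x, ct) == pow(identity, s, p)    # Hub-local pseudonym id^s
```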
A Fully Homomorphic Encryption (FHE) scheme would then seem to be the obvious post-quantum replacement for Elgamal, if these schemes were not so prohibitively slow. Or are they? In this talk I'll sketch how an efficient post-quantum polymorphic pseudonymization scheme might be achieved by exploiting the R-LWE problem and R-LWE-based leveled homomorphic encryption, avoiding the full use (and expense) of FHE.
Please see the attachment to our previous mail.
Over the past two decades, research has repeatedly demonstrated the risks that shared caches pose to information confidentiality. In a typical attack, the adversary first manipulates the cache to achieve a known state and then measures changes from that known state to detect the victim's activity, leaking the information. Consequently, research on cache attacks typically concentrates on the known state of the cache. Adversarial works show how to achieve such a known state and how to detect deviations from it, whereas defensive works propose ways of preventing the attacker from achieving a known state or from measuring deviations in the state. However, much less effort has been spent on understanding the nature of cache state and how it can be manipulated.
In this talk we shift the focus to examining the attacker's ability to manipulate unknown cache state. We use the cache state of memory locations to represent Boolean variables and demonstrate operations that allow arbitrary computation on these variables. We first design logical gates that operate directly on cache state, allowing a program to control whether memory locations are cached or not depending on whether other locations are cached. We then show that these gates are composable enough to allow arbitrary computation on cache state. Finally, we demonstrate the security implications of universal computation in the cache.
Over the last decade, applications of neural networks (NNs) have spread to various aspects of our lives. Many companies base their businesses on building products that use neural networks for tasks such as face recognition, machine translation, and self-driving cars. Much of the intellectual property underpinning these products is encoded in the exact parameters of the neural networks. Consequently, protecting these is of utmost priority to businesses. At the same time, many of these products need to operate under a strong threat model, in which the adversary has unfettered physical control of the product. In this work, we present BarraCUDA, a novel attack on general-purpose Graphics Processing Units (GPUs) that can extract the parameters of neural networks running on the popular Nvidia Jetson Nano device. BarraCUDA uses correlation electromagnetic analysis to recover the parameters of real-world convolutional neural networks.
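To give a flavour of correlation-based analysis, here is a minimal sketch on simulated data; the leakage model, the multiply-and-mask target, and all parameters are hypothetical stand-ins, not the BarraCUDA tooling itself:

```python
import numpy as np

rng = np.random.default_rng(0)
HW = np.array([bin(v).count("1") for v in range(256)])   # Hamming weights

# Simulated side channel: traces leak HW(input * secret mod 256) plus noise.
n, secret = 2000, 173
inputs = rng.integers(0, 256, n)
traces = HW[(inputs * secret) & 0xFF] + rng.normal(0.0, 1.0, n)

def score(guess):
    hyp = HW[(inputs * guess) & 0xFF]       # predicted leakage for this guess
    return abs(np.nan_to_num(np.corrcoef(hyp, traces)[0, 1]))

best = max(range(256), key=score)
print(best == secret)    # the correct guess correlates the strongest
```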
We usually teach students that drawing lines on elliptic curves is enough to define a group law on such curves, and we then claim that we can do Diffie-Hellman on elliptic curves. But in reality, where do these lines come from, and don't we only use the x-coordinate?
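As a refresher (standard textbook material): on a short Weierstrass curve $y^2 = x^3 + ax + b$, the line through $P=(x_1,y_1)$ and $Q=(x_2,y_2)$ yields the chord-and-tangent addition law

$$ \lambda = \begin{cases} \dfrac{y_2 - y_1}{x_2 - x_1} & P \neq Q, \\[4pt] \dfrac{3x_1^2 + a}{2y_1} & P = Q, \end{cases} \qquad x_3 = \lambda^2 - x_1 - x_2, \qquad y_3 = \lambda(x_1 - x_3) - y_1. $$

The talk asks where this line comes from, and how much of the law survives when only the x-coordinates are kept.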
In modern cryptography, we advance in two dimensions. First, instead of working on an elliptic curve and relying on discrete logarithms for security, we want to use isogenies between elliptic curves to achieve post-quantum security. Second, instead of working on elliptic curves (genus 1), we want to work on hyperelliptic curves (genus 2 or above). But how do we get a group law on these curves, and can we still work with only the x-coordinate? And how do (higher-dimensional!) isogenies between such objects work?
In this talk, we answer all of these questions. We explain where the lines on elliptic curves come from, and what we mean mathematically by "the x-coordinate". We then show how we can also do post-quantum cryptography in higher dimensions. Our end goal is to do SQIsign verification in dimension 2, using only the x-coordinate.
Compressed Σ-protocols present a strengthening of the well-established Σ-protocol theory, yielding interactive proofs with low communication complexity. They show how to compress the communication complexity of basic Σ-protocols from linear down to (poly)logarithmic, using a recursive ``folding-technique'' adapted from Bulletproofs (Bootle et al., EUROCRYPT 2016, and Bünz et al., S&P 2018), at the expense of logarithmic rounds. This abstract modular theory can be instantiated from a variety of cryptographic hardness assumptions, e.g., the discrete-logarithm, strong-RSA, knowledge-of-exponent, and short-integer-solution assumptions.
The analysis of these and other interactive proofs requires the construction of a so-called knowledge extractor, showing that dishonest provers without knowledge of a secret witness cannot convince a verifier. However, prior proof techniques are no longer directly applicable. More precisely, the multi-round nature of these interactive proofs renders prior knowledge extractors inefficient.
In this talk, I will introduce compressed Σ-protocol theory and present several new techniques for analyzing the knowledge soundness of multi-round interactive proofs.
In this talk we discuss a novel technique to reduce the signature sizes of the tensor-based signature schemes MEDS and ALTEQ by a large factor. For both schemes, the signature size is dominated by the responses for rounds with nonzero challenges, and we reduce the signature size by reducing the size of these responses.
In both underlying Sigma-protocols, the prover generates a random transformation and sends it to the verifier as the response. Instead, in our new protocols, the prover derives the transformation from randomly generated partial transformations. The prover then sends the corresponding partial transformation to the verifier as the response, so that the verifier can derive the full transformation in the same way. Interestingly, it turns out that each response requires far fewer field elements to represent in this way.
We introduce n-PEP: an architecture for end-to-end secure data sharing with verifiable distributed pseudonymization. It allows users to securely share data with unlinkable user-specific (or domain-specific) pseudonyms (i.e. users do not know other users' pseudonyms for the same entity). This enables the network to technically enforce access control and restrict data sharing, as data can only be linked via the network, and also significantly limits the impact of data leakage by users. The n-PEP architecture avoids any single points of failure: all security properties remain guaranteed as long as at least one party (of n) in the network remains uncompromised.
We build on the existing ‘PEP Responsible Data Sharing Repository’ (https://pep.cs.ru.nl/) developed at Radboud University for several medical research projects. By describing the underlying technology as a special (i.e. asynchronous and pseudonymous) communication network (rather than a data repository), we are able to analyze the security properties of the architecture, identify weaknesses and propose improvements. Ultimately, we envision n-PEP as a building block for applications that base their security features on the principle of distributed trust.
Attackers can capture and analyze the (encrypted) network traffic of their target devices, examining patterns and specific metadata, such as the timing and sizes of encrypted transmissions, to extract information about them. We design and implement a smartphone fingerprinting attack, a specific type of network traffic analysis, in which the phones are connected to a WiFi network but not actively in use by their owners, to explore whether it is possible to distinguish phones solely by monitoring their passive traffic. We will discuss the attack scenario and our findings so far.
Only a year ago, we witnessed a rise in the use of Large Language Models (LLMs), especially when combined with applications like chatbot assistants. Safety mechanisms and specialized training procedures are implemented to prevent improper responses from these assistants. In this work, we bypass these measures for ChatGPT and Bard (and, to some extent, Bing chat) by making them impersonate complex personas with characteristics opposite to those of the truthful assistants they are supposed to be. We start by creating elaborate biographies of these personas, which we then use in a new session with the same chatbots. Our conversations follow a role-play style to elicit responses the assistant is not allowed to provide. By making use of personas, we show that prohibited responses are actually provided, making it possible to obtain unauthorized, illegal, or harmful information. This work shows that by using adversarial personas, one can overcome the safety mechanisms set out by ChatGPT and Bard. We also introduce several ways of activating such adversarial personas, altogether showing that both chatbots are vulnerable to this kind of attack. With the same principle, we introduce two defenses that push the model to interpret trustworthy personalities and make it more robust against such attacks.
A significant shortcoming in the privacy offered by 5G is the fact that the network operator must always identify each user, which the operator can link to personal information. We introduce solutions based on identity distribution to improve this and gain additional anonymity. We implement these within an existing open source 5G framework. We also discuss the trade-offs that these solutions come with, which influence the use cases for these solutions.
In cryptographic scheme design, optimization efforts traditionally prioritize computational efficiency, security robustness, and latency reduction. Side-channel attacks, which exploit indirect information leakage (like power consumption or electromagnetic emissions), have often been relegated to a niche concern, addressed separately from core design principles. This conventional approach, however, introduces significant challenges. When later adapted for side-channel resilience, cryptographic schemes frequently incur substantial performance penalties, undermining their initial optimization gains.
Recognizing this disconnect, our work advocates for a paradigm shift: integrating side-channel considerations into the foundational design stage of cryptographic schemes. This approach necessitates a comprehensive framework that abstracts side-channel threats. Our presentation will discuss a framework that achieves this integration, narrowing the gap between cryptographic efficiency and side-channel security.
We provide security foundations for SAFE, a recently introduced API framework for sponge-based hash functions tailored to prime-field-based protocols. SAFE aims to provide a robust and foolproof interface and has been implemented in the Neptune hash framework and several zero-knowledge proof projects, but it currently lacks a security proof. Our results pave the way for using SAFE with the full taxonomy of hash functions, including SNARK-, lattice-, and x86-friendly hashes.
In recent years, deep learning-based side-channel analysis (DLSCA) has become an active research topic within the side-channel analysis community. The well-known challenge of hyperparameter tuning in DLSCA encouraged the community to use methods that reduce the effort required to identify an optimal model. One of the successful methods is ensemble learning. While ensemble methods have demonstrated their effectiveness in DLSCA, particularly with AES-based datasets, their efficacy in analyzing symmetric-key cryptographic primitives with different operational mechanics remains unexplored.
Ascon was recently announced as the winner of the NIST lightweight cryptography competition. This will lead to broader use of Ascon and a crucial requirement for thorough side-channel analysis of its implementations. With these two considerations in view, we utilize an ensemble of deep neural networks to attack two implementations of Ascon. Using an ensemble of five multilayer perceptrons or convolutional neural networks, we can find the secret key for the protected Ascon implementation with fewer than 3,000 traces. To the best of our knowledge, this is the best currently known result. We can also identify the correct key with fewer than 100 traces for the unprotected implementation of Ascon, which is on par with the state-of-the-art results.
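For intuition, the aggregation step of such an ensemble can be sketched as follows; this is a simplified stand-in (real DLSCA maps model outputs to key guesses through the known public inputs, and the combination rule is one of several options):

```python
import numpy as np

def log_likelihoods(probs):
    """probs: (n_traces, 256) per-trace probabilities over key-byte guesses
    produced by one model; sum log-probabilities over the attack traces."""
    return np.log(probs + 1e-36).sum(axis=0)

def ensemble_rank(per_model_probs):
    """per_model_probs: list of (n_traces, 256) arrays, one per model."""
    total = sum(log_likelihoods(p) for p in per_model_probs)
    return np.argsort(total)[::-1]          # most likely key guess first

# Usage sketch: ranking = ensemble_rank([m.predict(traces) for m in models])
```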
Most public-key cryptography that is deployed in today’s systems is susceptible to attacks by quantum computers. With increasing investment in the development of large-scale quantum computers, it is important to develop cryptography that is secure against both classical and quantum attacks. Considering this, in 2016, NIST began an effort to standardise post-quantum secure key exchange mechanisms and signature schemes. In this talk, we will focus on signature schemes and introduce SQIsign, the only isogeny-based signature scheme that was submitted to NIST's recent alternate call for signatures, which boasts the smallest combined signature and public key sizes. We will discuss the benefits and drawbacks of SQIsign compared to other post-quantum secure signatures, and present joint work with Jonathan Komada Eriksen, Michael Meyer, and Krijn Reijnders that obtains a 2.65 to 4.40 factor speed-up in SQIsign verification.
Secure computation technologies allow meaningful processing of encrypted data without requiring this data to be decrypted first. Nowadays, all computations that can be performed on plaintext data can also be performed, or at least approximated, by means of secure computation, without compromising data privacy. This opens up a whole world of applications, such as collaborative analytics with data shared by different organizations, that were previously prohibited due to privacy concerns. This talk will introduce the two main secure computation technologies, namely Multiparty Computation and Homomorphic Encryption.
Fuzzing is a widely used testing technique that aims to find vulnerabilities in code. Although (stateless) fuzzing has been widely explored, stateful fuzzing is still relatively new and thus leaves room for further research and investigation. In this talk, we will introduce the simple (but effective) idea behind fuzzing and then dive into the challenges of stateful fuzzing. Finally, we will introduce AFL*, a novel stateful fuzzer that outperforms state-of-the-art fuzzers by three orders of magnitude while still achieving better state and code coverage.
Modern distributed learning systems have enabled collaborative training of a shared global model across different data owners without sharing local real data. Such techniques not only facilitate privacy-preserving learning, but also harness resources at the edge. However, there is still a long way to go toward trustworthy distributed machine learning. In this talk, based on our papers, I will discuss malicious behaviors and defense mechanisms during both the training and inference phases of distributed learning, launched by servers or clients, and resulting in either privacy leakage or security risks. These adversarial parties emphasize the urgent need to reconsider and design distributed learning systems that are reliable, trustworthy, and robust.
Physically Unclonable Functions (PUFs) are specialized circuits that leverage the intrinsic circuit-level variabilities of a device to realize a random instance-specific function in hardware. This makes them an excellent candidate for a hardware root-of-trust in security applications such as device fingerprinting, authentication protocols, and key generation. Despite enormous research efforts in designing PUFs, their vulnerabilities are still being exploited using machine learning (ML) based model-building attacks. Due to the inherent complexity of exploring and manually converging on a strong PUF composition, building ML-attack-resistant PUFs remains a challenge.
In this talk, we briefly discuss the challenges in developing robust silicon PUF constructions. We then discuss an automated framework proposed to formally assess the learnability of different PUF constructions and compositions, and to guide the designer in exploring resilient PUFs. The proposed framework represents PUF constructions formally and evaluates the learnability of their compositions in the pre-silicon phase. This tool aims to help PUF designers make informed design choices and strengthen PUF constructions at the architectural level.
After briefly introducing ongoing work by my students, we will focus on a recently accepted paper. In today's digital age, home routers are a cornerstone of our Internet experience, quietly operating in the background with settings we rarely adjust. In this talk, we reveal a cautionary tale: default settings, often left unchanged, could be opening doors to security and privacy risks. We examined popular home routers across 14 brands and built a test suite for a range of features. This process led to the discovery of dozens of vulnerabilities, ranging from weak Wi-Fi security to trivial admin passwords and unencrypted update processes. Although our research is orthogonal to recent studies focusing on router firmware analysis, it underscores the importance of taking a broader look at a problem, one that can also provide meaningful insights.
Fault injection attacks have caused implementations to behave unexpectedly, resulting in spectacular bypasses of security features and even the extraction of cryptographic keys. Clearly, developers want to ensure the robustness of their software against faults and eliminate production weaknesses that could lead to exploitation. Several fault simulators have been released that promise cost-effective evaluations against fault attacks. In this talk, we set out to discover how suitable such tools are for a developer who wishes to create software that is robust against fault attacks.
This talk describes a buffer overflow vulnerability in the SHA-3 implementation submitted to NIST, which remained undetected for well over a decade. The vulnerability affects several widely-used software projects that have integrated variants of this code, including the Python and PHP scripting languages. It allows attacker-controlled values to be XORed into memory, thereby making many standard protection measures against buffer overflows (e.g., canary values) completely ineffective. A proof-of-concept that allows arbitrary code execution is provided.
In this talk I will explain at a high level how a public-key cryptosystem is made from hard lattice problems. Essentially, encryption amounts to a 'random disturbance' of a lattice point, and decryption amounts to 'rounding back' to the lattice point.
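A toy sketch of this idea, using an LWE-style bit encryption with deliberately insecure demo parameters: the ciphertext is a noisy linear combination (the "random disturbance"), and decryption removes the structured part and rounds the remaining small noise away.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, m = 64, 3329, 256
s = rng.integers(0, q, n)                   # secret vector
A = rng.integers(0, q, (m, n))              # public random matrix
e = rng.integers(-2, 3, m)                  # small errors
b = (A @ s + e) % q                         # public key: noisy A*s

def encrypt(bit):
    r = rng.integers(0, 2, m)               # random subset of the rows
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q        # message placed in the "high bit"
    return u, v

def decrypt(u, v):
    d = (v - u @ s) % q                     # = small noise + bit * q/2
    return int(q // 4 < d < 3 * q // 4)     # round back: near 0 or near q/2?

u, v = encrypt(1)
print(decrypt(u, v))   # 1
```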
Then, the notions of ideal lattices and module lattices ('structured' lattices) are introduced. Results of attacks (and reductions) on those types of lattices, and their impact, are briefly treated. If time permits, I will go into more detail on the attack on ideal lattices.
Implementing post-quantum cryptographic schemes, specifically CRYSTALS-Dilithium and CRYSTALS-Kyber, on resource-constrained devices presents challenges, primarily due to the high computational cost of polynomial operations. This talk introduces PQ.V.ALU.E, a hardware/software co-design strategy utilizing RISC-V extensions. We propose a lightweight custom ALU tailored for NTT computations and extend the RISC-V ISA with ten new instructions. We show how PQ.V.ALU.E offers a practical solution for PQC implementations.
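For intuition about the operation being accelerated, here is a naive (quadratic-time) number-theoretic transform; Kyber and Dilithium use the fast negacyclic butterfly variant, which is what a dedicated ALU speeds up, and the parameters below are illustrative only:

```python
# NTT over Z_q: evaluate a polynomial at the powers of an n-th root of
# unity, turning polynomial multiplication into pointwise multiplication.
q, n = 3329, 8                 # 3329 = 13*256 + 1, so small roots of unity exist

# find a primitive n-th root of unity mod q (order exactly n = 8)
w = next(x for x in range(2, q)
         if pow(x, n, q) == 1 and pow(x, n // 2, q) != 1)

def ntt(f):
    return [sum(c * pow(w, i * j, q) for j, c in enumerate(f)) % q
            for i in range(n)]

def intt(F):
    n_inv, w_inv = pow(n, -1, q), pow(w, -1, q)
    return [n_inv * sum(C * pow(w_inv, i * j, q) for j, C in enumerate(F)) % q
            for i in range(n)]

f = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(f)) == f       # the transform is invertible
```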
Despite considerable achievements of deep learning-based side-channel analysis, overfitting represents a significant obstacle in finding optimized neural network models. This issue is not unique to the side-channel domain. Regularization techniques are popular solutions to overfitting and have long been used in various domains. At the same time, works in the side-channel domain show only sporadic utilization of regularization techniques, and no systematic study investigates their effectiveness. In this paper, we investigate the effectiveness of regularization on a randomly selected model by applying four powerful and easy-to-use regularization techniques to eight combinations of datasets, leakage models, and deep learning topologies. The investigated techniques are L1, L2, dropout, and early stopping. Our results show that while all these techniques can improve performance in many cases, L1 and L2 are the most effective. Finally, if training time matters, early stopping is the best technique.
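For concreteness, all four techniques can be applied in a few lines of Keras; the model shape, trace length, and hyperparameters below are illustrative placeholders, not the configurations evaluated in the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(700,)),                  # one trace of 700 samples
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-5)),    # L1
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),    # L2
    layers.Dropout(0.3),                                        # dropout
    layers.Dense(256, activation="softmax"),       # e.g. key-byte classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

early_stop = tf.keras.callbacks.EarlyStopping(     # early stopping
    monitor="val_loss", patience=10, restore_best_weights=True)
# model.fit(X_prof, y_prof, validation_split=0.1, epochs=200,
#           callbacks=[early_stop])
```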
With the conclusion of the third round of standardizing post-quantum cryptography, NIST announced an additional fourth round for digital signature schemes. Their main motivation is to have more diversity in hardness assumptions and not rely solely on structured lattices. This call led to an increased interest in code-based and multivariate crypto and their cryptanalysis.
In the current state of the art, algebraic methods often outperform combinatorial methods in high parameter ranges. Two examples of hardness assumptions for which this is the case are Matrix Code Equivalence (MCE), on which the signature scheme MEDS is based, and Alternating Trilinear Form Equivalence (ATFE).
In this talk we explore the goals and obstacles encountered in algebraic cryptanalysis using these two guiding examples.
The field of post-quantum cryptography encompasses the efforts towards developing alternatives for protecting our data and communications from adversaries with a large quantum computer. This brief introduction begins by exploring the timeline of advancements in post-quantum cryptography and the ongoing standardization process. We then provide an overview of the various classes of post-quantum primitives, such as lattice-based, code-based, multivariate, hash-based, and isogeny-based schemes. The main objective is to gain an intuitive understanding of the underlying hard problems associated with each class, as well as the respective strengths and challenges.
On 7 February 2023, NIST announced that Ascon has been selected as the winner of their lightweight cryptography standardisation process (generally referred to as the NIST LWC competition). It is expected to be the basis for securing tiny tech such as pacemakers, RFID tags, and DRMed printer cartridges. As Ascon is not a single algorithm, but a family of modes of operation and parameters built on top of the Ascon-p permutation, there are several open questions around current as well as (potential) future standardisation.
This talk discusses how the competition has progressed up to now, including a short illustrated history of Ascon's modes and parameters. Additionally, it summarises current debates through the lens of different stakeholders. Looking forward, potential futures of permutation-based cryptography are sketched, both in concrete terms and tied to overarching principles. Finally, some lessons from the usable security field are emphasised.
This talk will cover, in an educational manner, the basics of IT for side-channel analysis (SCA) and how this has impacted the field of hardware security. It will serve as an introduction or refresher, followed by a discussion of the evolution of the field.
In today's world, traditional security measures such as passwords and PINs are becoming increasingly vulnerable to cyber threats. Biometric verification systems, on the other hand, offer a promising solution for enhancing security and ensuring privacy. By specifically characterising and identifying individual human properties, biometric verification systems can provide an additional level of identity verification in a ubiquitous online environment.
This presentation explores the various types of biometric systems, including facial recognition, fingerprint scanning, iris recognition, and signature and voice recognition, and their applications in different settings such as banking, healthcare, and government. We will also examine the benefits and limitations of these systems. Implementation aspects of specific realised systems will be presented, and the role of cryptography in the security of the overall system will be discussed.
Deep neural networks (DNNs) represent a powerful technique for assessing cryptographic security concerning side-channel analysis (SCA) due to their ability to aggregate leakages automatically, rendering attacks more efficient without preprocessing. Nevertheless, despite their effectiveness, DNNs employed in SCA are predominantly black-box algorithms, posing considerable interpretability challenges. Indeed, the designer would want to know more about the leakage sources leading to key recovery to help improve the security of the cryptographic implementation.
In this talk, we present a novel technique called Key Guessing Occlusion (KGO) that acquires a minimal set of sample points required by the DNN for key recovery. We denote this set of sample points as DeepPoIs. These DeepPoIs provide information on which areas of the traces are important to the DNN for retrieving the key, enabling evaluators to know where to refine their cryptographic implementation.
Although Encrypt-then-MAC (EtM) is a popular mode for authenticated encryption (AE), almost all designs following this paradigm (including the AE suites for TLS) are unfortunately vulnerable to nonce misuse: a single repetition of the nonce value reveals the hash key, leading to a universal forgery attack. There are only two authenticated encryption schemes following the EtM paradigm that can resist nonce misuse attacks: GCM-RUP (CRYPTO '17) and GCM/2+ (Inscrypt '12). However, they are secure only up to the birthday bound in the nonce-respecting setting, resulting in a restriction on the amount of data that can be processed under a single key.
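For intuition, recall the standard GCM example of this effect: a GCM tag has the form $T = \mathrm{GHASH}_H(A, C) \oplus E_K(N)$, where the mask $E_K(N)$ depends only on the key and the nonce. If the nonce is repeated, the masks cancel:

$$ T_1 \oplus T_2 = \mathrm{GHASH}_H(A_1, C_1) \oplus \mathrm{GHASH}_H(A_2, C_2), $$

a known polynomial equation in the hash key $H$ over $\mathbb{F}_{2^{128}}$, whose roots can be computed to recover $H$ and forge tags.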
This paper introduces nEHtM, a nonce-based variant of EHtM (FSE '10) constructed using a block cipher, which achieves beyond-birthday-bound (BBB) unforgeability that gracefully degrades under nonce misuse. It also combines nEHtM with the CENC (FSE '06) mode of encryption, using the EtM paradigm, to realize a nonce-based AE, CWC+. CWC+ is very close to the CWC AE scheme (FSE '04), requiring only a few more XOR operations; not only does it provide BBB security, but its security also degrades gracefully under nonce misuse.
The consideration of ethical aspects is (relatively) new to CS research. After all, there do not seem to be ethical issues with typical CS fields such as model checking, theorem proving, or algorithmics. Ethical issues may arise in their application, sure, but in the underlying foundations?
Regardless of the validity of that point of view, the proliferation of mobile internet connectivity, coupled with the advent of social media, has made this position untenable. Much CS research nowadays touches upon the internet, social media, apps, human interaction, etc. This requires us to start considering ethics more seriously.
In this talk, I first showcase some (well-known) morally questionable cases where CS played a role, such as Volkswagen's Dieselgate. Then I will present some basic handholds for starting to consider ethics in your research, drawing from the Menlo Report. We briefly zoom in on deception research and its ethical aspects. Lastly, there may or may not be homework!
The presentation proposes a novel method for brute-forcing private RSA keys. While traditional brute-forcing algorithms scale only with processor performance, the new method also scales with the available memory size. In this way, we can achieve a performance of 50 bits per second on standard hardware. The approach leverages existing research and combines well with implementation attacks like side-channel analysis, making such attacks more realistic. The method is well suited to completing a partial attack, where the unknown key bits are randomly distributed over the private exponent.
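One detail worth making explicit: testing a candidate private exponent is cheap, so the cost lies entirely in enumerating candidates. A minimal verification sketch (the memory-scaling search strategy itself is the subject of the talk and is not reproduced here):

```python
def is_private_exponent(d_candidate, e, N, probe=2):
    """A candidate d is correct iff it inverts the public RSA operation,
    i.e. (probe^e)^d == probe (mod N)."""
    return pow(pow(probe, e, N), d_candidate, N) == probe

# Enumerate candidates for the unknown key bits and keep those passing the
# check; testing a second probe value rules out rare false positives.
```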
Nowadays, citizens often use mobile solutions to authenticate to online services. In 2015, Strong Customer Authentication (SCA) became a requirement for businesses processing online payments in Europe, as part of the PSD2 (Revised Payment Services) directive. The goal of SCA is to protect customers by reducing the risk of fraud. Consequently, customers can be asked to authenticate by providing two of the following three factors: something they know (such as a PIN), something they own (such as a mobile phone), and something they are (such as biometrics).
In 2014, the EU established the eIDAS (electronic Identification, Authentication and trust Services) regulation to regulate electronic identification and trust services in the European market. eIDAS provides three levels of assurance (LoA): low, substantial, and high, referring to the degree of confidence in the claimed identity of a person, in which the factors mentioned above play a role. In recent years, different authentication solutions that are officially recognized within the EU were developed, thereby providing some level of assurance.
In this work, we analyze and compare some mobile authentication solutions that are recognized within the EU, in particular, DigiD, IRMA, itsme, Smart-ID, and the AusweisApp2.
Recent advances have significantly improved fuzz testing, uncovering many vulnerabilities in various software systems. However, certain types of software, such as network protocols, may still present challenges for efficient fuzz testing.
This article presents two enhancements that allow efficient fuzzing of any network protocol implemented using POSIX network I/O. The first is the Desock+ library, which simulates a network socket and supports different POSIX options to make it suitable for network protocol fuzzing. The second is Green-Fuzzer, which sends input messages in one go and reduces the system-call overhead while fuzzing network protocols. Our evaluation shows that these enhancements make our fuzzer 5 to 6 times faster than AFLNet.
Bent functions are a class of Boolean functions with many interesting applications for the design of symmetric ciphers, since they reach the highest possible nonlinearity. The literature features a multitude of different constructions, and nowadays introducing a new one requires careful examination of the functions it generates, to assess whether they are equivalent to previously known classes. This talk is about one such new construction of bent functions, recently published in the journal Designs, Codes and Cryptography. The paper describes the construction in terms of subspaces generated by linear recurring sequences, which form a partial spread.
In this talk, instead, we give a `behind the scenes' story that shows how we arrived at this simple description. We illustrate the original idea of using a construction of Mutually Orthogonal Latin Squares (MOLS) based on Cellular Automata (CA) to define a Hadamard matrix with the structure associated with a bent function. Next, we explain the connection with linear recurring sequences and partial spreads. We conclude with some encouraging results on the rank distribution of the bent functions produced by our construction, which indicate that many of them are neither in the Maiorana-McFarland class nor in the Desarguesian partial spread.
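For readers who want to experiment: bentness is easy to check numerically, since a function $f$ on $n$ variables is bent iff every Walsh coefficient equals $\pm 2^{n/2}$. A small self-contained check (toy code, independent of our construction):

```python
def walsh(f):
    """Fast Walsh-Hadamard transform of a truth table f (entries in {0,1})."""
    W = [(-1) ** bit for bit in f]          # switch to +-1 representation
    h = 1
    while h < len(W):                       # standard in-place butterflies
        for i in range(0, len(W), 2 * h):
            for j in range(i, i + h):
                W[j], W[j + h] = W[j] + W[j + h], W[j] - W[j + h]
        h *= 2
    return W

def is_bent(f, n):
    return all(abs(w) == 2 ** (n // 2) for w in walsh(f))

# Example: the inner product f(x1..x4) = x1*x2 + x3*x4 is bent on 4 variables.
f = [(x >> 3 & 1) * (x >> 2 & 1) ^ (x >> 1 & 1) * (x & 1) for x in range(16)]
print(is_bent(f, 4))   # True
```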
Ascon is a family of cryptographic primitives for authenticated encryption and hashing introduced in 2015. It was selected as one of the ten finalists in the NIST Lightweight Cryptography competition. Since its introduction, Ascon has been extensively cryptanalyzed, and the results of these analyses indicate the good resistance of this family of cryptographic primitives against known attacks, like differential and linear cryptanalysis.
Proving upper bounds for the differential probability of differential trails and for the squared correlation of linear trails is a standard requirement to evaluate the security of cryptographic primitives. It can be done analytically for some primitives like AES. For other primitives, computer assistance is required to prove strong upper bounds for differential and linear trails. Computer-aided tools can be classified into two categories: tools based on general-purpose solvers and dedicated tools. General-purpose solvers such as SAT and MILP are widely used to prove these bounds; however, they seem to have lower capabilities and thus yield less powerful bounds compared to dedicated tools.
In this work, we present a dedicated tool for trail search in Ascon. We arrange 2-round trails in a tree and traverse this tree efficiently using a number of new techniques we introduce. Then we extend these trails to more rounds, where we also use the tree traversal technique to do so efficiently. This allows us to scan much larger spaces of trails faster than previous methods using general-purpose solvers. As a result, we prove tight bounds for 3-round linear trails, and, for both differential and linear trails, we improve the existing upper bounds for other numbers of rounds. In particular, for the first time, we prove bounds beyond $2^{-128}$ for 6 rounds and beyond $2^{-256}$ for 12 rounds of both differential and linear trails.
To what extent are your children being tracked while browsing websites?
Children may not understand the potential negative consequences of disclosing personal information online; that is why laws such as COPPA and GDPR have been implemented to protect children's privacy by regulating data collection and usage on the Internet. However, further research and monitoring are needed to ensure that these laws are being followed and that children's privacy is protected. Online tracking on websites has become a widespread technique to derive information about the preferences and behavior of users for different purposes, including behavioral advertising, personalization, etc.
This study investigates to what extent third-party trackers collect data on children’s websites and violate children's privacy. To this end, we first trained a multilingual classifier to detect potential children’s websites based on HTML meta tags, such as titles and descriptions. We also extended DuckDuckGo’s Tracker Radar Collector to crawl websites and measure data exfiltration. We then analyzed the impact of location, of user consent expressed through cookie consent dialogs, and of mobile vs. desktop browsing. Our findings reveal that children are tracked on the majority of these websites. Additionally, we discovered that only a third of the websites in the EU, and even fewer in the US, request consent. Moreover, when consent is rejected, users are typically still tracked. We also observed that advertisements were present on a third of the children's websites, and that these advertisements were primarily targeted.
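As an illustration of the classification step only, here is a toy sketch with placeholder data; the actual classifier is multilingual and trained on a large labeled corpus, and the feature set and model choice below are assumptions for the example:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Concatenated <title> and <meta name="description"> text per website.
texts = ["fun games and cartoons for kids",
         "quarterly investor relations report"]
labels = [1, 0]                          # 1 = likely a children's website

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["coloring pages and puzzles for children"]))
```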
Attack graphs (AGs) are insightful models of attacker strategies that show the paths followed by attackers to penetrate a network. Existing work on AG generation requires expensive expert knowledge and published vulnerability reports, which do not yet exist for unreported vulnerabilities. Meanwhile, there exists an abundance of intrusion alerts from prior security incidents that analysts struggle to investigate.
In this talk, I introduce Alert-driven Attack Graphs (AGs) that are learned purely from intrusion alerts, without a priori expert knowledge. The method exploits the temporal and probabilistic dependence between alerts in a Suffix-based Probabilistic Deterministic Finite Automaton (S-PDFA) — a model that brings infrequent severe alerts into the spotlight and summarizes paths to contextually-similar severe alerts. Then, AGs are extracted from the model on a per-objective, per-victim basis. The AGs are succinct, interpretable and provide actionable intelligence about attack progression, such as strategic differences and overlapping attack paths. They even show that attackers tend to follow shorter paths after they have discovered a longer one.
Neural networks have become popular because of their versatility and state-of-the-art results in many applications such as image classification, natural language processing, speech recognition, forecasting, etc. These applications are also used in resource-constrained environments such as embedded devices.
In this talk, the susceptibility of neural network implementations to reverse engineering is explored on the NVIDIA Jetson Nano microcomputer via side-channel analysis.
To this end, an architecture extraction attack is presented.
In the attack, 15 popular convolutional neural network architectures (EfficientNets, MobileNets, NasNet, etc.) are implemented on the GPU of Jetson Nano and the electromagnetic radiation of the GPU is analyzed during the inference operation of the neural networks.
The results of the analysis show that neural network implementations are highly vulnerable to reverse engineering.
Neural networks provide state-of-the-art results in many domains. Yet, they often require energy-intensive and time-consuming training processes. Therefore, the research community is exploring alternative, energy-efficient approaches like spiking neural networks (SNNs). SNNs mimic brain neurons by encoding data into sparse spikes, resulting in energy-efficient computing. To exploit the properties of SNNs, they can be trained with neuromorphic datasets that capture differences in motion. SNNs, just like any neural network model, can be susceptible to security threats that make the model behave anomalously. One of the most crucial threats is the backdoor attack, which modifies the training set to inject a trigger into some samples. After training, the neural network will perform correctly on the main task. However, in the presence of the trigger (backdoor) on an input sample, the attacker can control its behavior.
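To make the attack mechanics concrete, here is a toy trigger-injection sketch on static image-like data; neuromorphic event data additionally has a time dimension, which is what the paper addresses, so this is only the classical baseline:

```python
import numpy as np

def poison(images, labels, target_label, rate=0.1, rng=None):
    """Stamp a small white square in a corner of a fraction of the training
    images and relabel them, so the trained model learns to associate the
    patch with the attacker's target class."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0          # 4x4 trigger patch, bottom-right
    labels[idx] = target_label
    return images, labels
```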
The existing works on backdoor attacks consider standard datasets and not neuromorphic ones. In this paper, to the best of our knowledge, we present the first backdoor attacks on neuromorphic datasets. We then evaluate the performance of our backdoor using spiking neural networks, achieving top accuracy on both main and backdoor tasks, up to 99%.
First of all, we take a thorough look at an error in a paper by Daemen et al. (ToSC 2018), which studies minimal requirements for tree-based hashing based on multiple primitives, including block ciphers. This reveals that the error is more fundamental than previously shown by Gunsing et al. (ToSC 2020), which is mainly interested in its effect on the security bounds. It turns out that the cause of the error is an essential oversight in the interaction between the different oracles used in the indifferentiability proofs. In essence, it reduces the claim from the normal indifferentiability setting to the weaker sequential indifferentiability one. As a matter of fact, this error appeared in multiple earlier indifferentiability papers, including the optimal indifferentiability of the sum of permutations (EUROCRYPT 2018) and the recent ABR+ construction (EUROCRYPT 2021). We discuss in detail how this oversight is caused and how it can be avoided.
We next demonstrate how the negative effects on the security bound of the construction by Daemen et al. can be resolved. Instead of only allowing a truncated output, we generalize the construction to allow for any finalization function and investigate its security for different types of finalization. Among other findings, we show that the modern BLAKE3 construction is secure in principle, but that its use of extendable output requires the counter used for random access to be public.
Twitter offers so-called 'verified user badges' to accounts of public interest. Yet, this authenticity marker is often confused with credibility. As an alternative to these badges, iHub's iLab created Twid. Twid lets users sign their tweets with aspects of their identity through the IRMA application (e.g., their diploma or nationality). These signatures can then be (re)viewed by other users. In light of online misinformation, this study explores how 'verified user badges' and Twid's digital signatures affect credibility, and consequently sharing intentions.
Similar digital signatures can also be applied to PostGuard, another iLab project. The PostGuard mail add-on currently enables users to encrypt their emails and specify which users can decrypt them (again using IRMA attributes), offering security. Soon, PostGuard also aims to address the integrity and authenticity issues of email (e.g., identity fraud or phishing) through digital signatures. These signatures can also take the form of IRMA attributes to verify (parts of) the author's identity more intuitively. Yet, user-centred research is required to understand how to effectively integrate digital signatures into PostGuard. What does such research look like?
We'll close off with a discussion about last summer's spear phishing attempt in iCIS.
In this talk, I will present the cube key-recovery attack on SHA3-like designs. SHA3-like designs include all variants derived from the Keccak sponge function and Xoodoo structures, such as Keccak-MAC, Ketje Jr, Xoodyak, and so on. I will start by describing the general cube attack and then present two types of cube attacks derived from the main version: the cube-attack-like attack and the conditional cube attack. I will also go over the complexity of each type of attack on different variants of SHA3-like designs.
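As a reminder of the core identity behind cube attacks: choose a set $I$ of public input bits (the cube) and write the output bit as $f(k, v) = t_I \cdot p_I(k, v') + q(k, v)$, where $t_I$ is the product of the cube variables and no monomial of $q$ contains all of them. Summing over all $2^{|I|}$ cube assignments cancels $q$ and isolates the superpoly:

$$ \bigoplus_{v_I \in \{0,1\}^{|I|}} f(k, v) = p_I(k, v'), $$

which, for well-chosen cubes, is a low-degree (often linear) function of the key bits and can be solved for them.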
Blockchains based on proof-of-work suffer from serious drawbacks, such as high computational overhead, long confirmation times, and forks. Committee-based blockchains provide an alternative that tackles these problems. These blockchains use a committee to approve a block at each height. However, rewarding the committee for its work is challenging: the reward mechanism must be fair and robust to attacks.
In this paper, we study leader-based reward mechanisms in committee-based blockchains in the presence of rational, colluding, and Byzantine committee members. First, we study the incentives of committee members to deviate and show that an existing reward mechanism is susceptible to attacks from both colluding and Byzantine members. We then propose a reputation-based leader selection mechanism that provides sufficient incentives to coerce rational members to abide by the protocol, and significantly limits the possible gains of collusion. Additionally, our approach reduces the ability of Byzantine members to perform targeted attacks.
After WW-II, the Dutch government wanted to have its own secure cryptographic communications application for use on the telex network. Telex at that time can be seen as electronically connected typewriters, or the forerunner of electronic mail.
There were some fundamental reasons for securing telex messages then. Some people in the Dutch government had found out during WW-II that even trusted allies were reading their “secured” messages, or otherwise collecting them for later analysis.
What did the government use for encryption directly after WW-II, and why? How did they organize the process? Who was the founding father, which organization, or better yet, who was the driving force in this state-secret application environment? The founding father was long connected to the PTT and its research laboratories, as well as to TU Delft.
In short: What were the factors behind the rise and fall of the crypto-developments and industry in the Netherlands between 1945 and 1965?
Behind this story: three years of research evaluating and structuring 185 public archive boxes and ±1000 State Secret document pages obtained (finally) from secret government archives.
PostGuard is a new secure email system utilizing modern identity-based encryption (IBE) and attribute-based credential (ABC) primitives to build upon the authentication mental model. Existing email systems often base their cryptography on public-key encryption, which requires the sender to authenticate the recipient, and only allow the recipient to be authenticated through a single attribute related to her identity, e.g., her email address.
Instead, PostGuard allows the sender to enforce fine-grained access control on the encrypted message by requiring that a recipient satisfies a policy. For authentication, we use the IRMA identity platform, which stores ABCs decentrally within the IRMA mobile app. Furthermore, PostGuard supports this functionality in a privacy-friendly manner: the ciphertexts cryptographically hide possibly sensitive data about the identity of the user. PostGuard’s libraries and protocols are fully open source and modular, which makes them auditable and reusable. By providing end-user add-ons for two widely used email clients, Thunderbird and Outlook, we demonstrate that this technology can integrate into existing email workflows.
During this lunch talk, I will present work done with Joan Daemen, Christoph Dobraunig, Santosh Ghosh, and Shahram Rasoolzadeh on designing a low-latency tweakable block cipher: BipBip.
Recently, a memory safety concept called Cryptographic Capability Computing (C^3) has been proposed. C^3 is the first memory safety mechanism that works without requiring extra storage for metadata and hence has the potential to significantly enhance the security of modern IT systems at a rather low cost. To achieve this, C^3 heavily relies on ultra-low-latency cryptographic primitives. However, the most crucial primitive required by C^3 demands uncommon dimensions. To partially encrypt 64-bit pointers, a 24-bit tweakable block cipher with a 40-bit tweak is needed.
The research on low-latency tweakable block ciphers with such small dimensions is not very mature. Therefore, designing such a cipher provides a great research challenge, which we take on in this paper. As a result, we present BipBip, a 24-bit tweakable block cipher with a 40-bit tweak that allows for ASIC implementations with a latency of 3 cycles at a 4.5 GHz clock frequency on a modern 10 nm CMOS technology.
Finding a solution to a set of polynomial equations is a fundamental problem in cryptography. The general mathematical tool for solving this problem is based on the notion of Gröbner bases. In this talk I will introduce the basic concepts of this theory and discuss cases where it might prove an efficient tool (and some cases where it probably is not). If time permits, I will also present some open problems involving the use of Gröbner bases in cryptography.
With the growing popularity of cryptocurrencies, interest in digital signatures is also on the rise. Since the 1990s, researchers have been attempting to enhance known signature schemes with new properties useful in specific cases.
One such property of threshold schemes is non-interactivity: allowing any subset of a group of people to generate a signature without having to interact. A solution to the quest for non-interactivity is to divide the signing process into two steps: the 'presigning' condenses most communication rounds and can be performed long before the signature is needed, while the 'signing' only takes a single round and happens after the message is chosen. However, most protocols require that the subset of the signers be fixed before presigning, since they are the only ones who participate in it.
We present a non-interactive threshold ECDSA protocol that foregoes this assumption, removing the need for the signers to organize, and therefore interact, before signing. To evaluate its performance, our protocol has been implemented in Rust and benchmarked.
Fuzzing is a security testing methodology effective in finding bugs.
In a nutshell, a fuzzer sends multiple slightly malformed messages to the software under test, hoping for crashes or weird system behaviour.
The methodology is relatively simple, although applications that keep internal states are challenging to fuzz.
The scientific community has responded to this challenge by developing fuzzers tailored to stateful systems, but a clear understanding of the variety of strategies is still missing.
In this talk, we present the first taxonomy of fuzzers for stateful systems and provide a systematic comparison and classification of these fuzzers.
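As a minimal illustration of the "nutshell" description above, here is a toy mutation fuzzer; the target binary is a hypothetical placeholder, and real fuzzers add coverage feedback, corpus management, and crash triage:

```python
import random
import subprocess

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Produce a slightly malformed variant of the seed input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)   # flip one random bit
    return bytes(data)

seed = b"GET / HTTP/1.1\r\nHost: x\r\n\r\n"
for _ in range(1000):
    result = subprocess.run(["./target"],     # hypothetical program under test
                            input=mutate(seed), capture_output=True)
    if result.returncode < 0:                 # killed by a signal: a crash
        print("crash, signal:", -result.returncode)
```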
Many are familiar with online banking, where, in order to access your account, some sort of authentication beyond a simple password may be required. One common additional form of authentication is to enter a subset of randomly chosen positions from a personal identification number (PIN). Different banks and organisations may vary the number of digits requested.
Despite this method being phased out by some banks, it is still used in conjunction with other methods of verification. We wanted to explore how much harder it is to guess a PIN via its partial PINs, as opposed to guessing the full PIN.
An in-the-wild example could look like the following scenario: you attempt to guess your friend's bank password via her phone. You have a certain number of guesses a day before being locked out. How many guesses do you need to recover the full PIN? If you correctly guess one partial PIN, you now know the digits at those positions, and so your chances of guessing the full PIN increase on subsequent guesses.
To answer these questions, we created four different guessing strategies and compared their guessing efficacy. We simulated the guessing process on PINs of different lengths, with partial PINs of different lengths, and graphed the data to compare them visually, noting any patterns.
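A toy version of such a simulation, for one naive strategy (uniform random guesses that remember confirmed positions; the four strategies we actually compare are not reproduced here):

```python
import random

def guesses_until_full_pin(pin_len=4, partial_len=2, rng=random.Random(0)):
    pin = [rng.randrange(10) for _ in range(pin_len)]
    known = {}                                    # position -> confirmed digit
    attempts = 0
    while len(known) < pin_len:
        attempts += 1
        positions = rng.sample(range(pin_len), partial_len)  # bank's challenge
        guess = [known.get(p, rng.randrange(10)) for p in positions]
        if guess == [pin[p] for p in positions]:
            known.update(zip(positions, guess))   # correct partial: remember it
    return attempts

trials = [guesses_until_full_pin() for _ in range(1000)]
print(sum(trials) / len(trials))   # average guesses to recover the full PIN
```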
In this talk we present a summary of the first edition of the GHTC workshop, held on August 13, 2022 in Santa Barbara, California. The first edition of GHTC was devoted to the general concept of cryptographic agility, whose different and sometimes intricate aspects were discussed at length by twelve famous cryptographers.
Francisco Rodríguez-Henríquez received his PhD in electrical and computer engineering from Oregon State University in June 2000. From July 2000 to May 2002, Dr. Rodríguez-Henríquez worked as a security architect for IT companies in Germany and the U.S. Currently, he is a professor (CINVESTAV-3D Researcher) in the Computer Science Department of the Advanced Research Center (CINVESTAV-IPN) in Mexico City, which he joined in May 2002. Since May 2021, Dr. Rodríguez-Henríquez has been Technical Director in the Cryptography Research Centre of the Technology Innovation Institute in Abu Dhabi, UAE.
Dr. Rodríguez-Henríquez's research interests lie in cryptography, computer arithmetic, and information security.
Payment channel networks (PCNs) enhance the scalability of blockchains by allowing parties to conduct transactions off-chain, i.e., without broadcasting every transaction to all blockchain participants. To conduct transactions, a sender and a receiver can either establish a direct payment channel with a funding blockchain transaction or leverage existing channels in a multi-hop payment. The security of PCNs usually relies on the synchrony of the underlying blockchain, i.e., evidence of misbehavior needs to be published on the blockchain within a time limit. Alternative payment channel proposals that do not require blockchain synchrony rely on quorum certificates and use a committee to register the transactions of a channel. However, these proposals do not support multi-hop payments, a limitation we aim to overcome.
In this paper, we demonstrate that it is in fact impossible to design a multi-hop payment protocol with both network asynchrony and faulty channels, i.e., channels that may not correctly follow the protocol. We then detail two committee-based multi-hop payment protocols that respectively assume synchronous communications and possibly faulty channels, or asynchronous communication and correct channels. The first protocol relies on possibly faulty committees instead of the blockchain to resolve channel disputes, and enforces privacy properties within a synchronous network. The second one relies on committees that contain at most f faulty members out of 3f+1 and successively delegate to each other the role of eventually completing a multi-hop payment. We show that both protocols satisfy the security requirements of a multi-hop payment and compare their communication complexity and latency.
The Digital Security Group found flaws in the second generation of mobile networks. Today we face the rollout of the 5th generation of mobile networks. Still, we see a lot of security issues in the 5th generation. This talk will give an overview of what to expect from the 5th generation security and why it is still so fruitful to do security research.
David Rupprecht is a postdoctoral researcher at the Ruhr University Bochum (RUB) and a guest researcher at RU. Before this, he completed his Ph.D. at the RUB on the topic "Enhancing the security of 4G and 5G mobile networks on protocol layer two". His research interests include the technical aspects of secure mobile (5G) networks. In his daily work, he uses software-defined radios to implement and evaluate attacks and countermeasures on mobile networks.
I'll give an introduction to fault attacks from a symmetric cryptographer's perspective, briefly covering some different attack variants (DFA, IFA, SFA, SIFA) and countermeasures.
Then we'll have a closer look at the DEFAULT block cipher, an interesting recent design published by Baksi et al. at ASIACRYPT 2021 with the goal of providing inherent cipher-level DFA resistance. I'll show why the design doesn't prevent DFA as intended, based on our paper: Marcel Nageler, Christoph Dobraunig, Maria Eichlseder: "Information-combining differential fault attacks on DEFAULT", EUROCRYPT 2022.
Maria Eichlseder is assistant professor of Cryptography at Graz University of Technology, Austria. Her research interests include the cryptanalysis, design, and implementation security of symmetric cryptographic algorithms, such as hash functions and authenticated encryption algorithms and their underlying primitives. She co-designed Ascon, a lightweight authenticated cipher that is among the winners of the CAESAR competition, and ISAP; both are now finalists in the NIST LWC competition.
One of the projects I worked on during my internship at NXP is engineering a low-memory implementation for the Dilithium signature scheme. We submitted the work to AfricaCrypt and it got accepted! :D
This presentation is going to be the try-out for our presentation in Fes.
Dilithium has traditionally had a reputation for needing a lot of RAM. In the past, most Dilithium implementations have used 50 -- 90 KiB of RAM. However, there are a lot of small-chip families, like the LPC800 (NXP), STM32F0 (STMicroelectronics), or XMC1000 (Infineon) families, that often don't even have more than 16 KiB of RAM. On these platforms, Dilithium just does not fit in the device SRAM.
In this work we were interested in seeing how little RAM Dilithium could *really* need in practice. We were hoping to see whether we would be able to fit the whole algorithm into 8 KiB of RAM.
We devised a number of tricks to compress parts of the algorithm, and I am going to highlight some of them in my presentation. We implemented the whole thing in C, and benchmarked the memory usage and the performance on Cortex-M4. For Dilithium2 and Dilithium3, we managed to get the memory usage under 8 KiB; trading the low memory footprint for a slowdown factor of 2.3 -- 2.8 for signing, and a slowdown factor of 1.8 -- 2.0 for verification, compared to PQClean.
Profiling side-channel analysis allows evaluators to estimate the worst-case security of a target. When security evaluations relax the assumptions about the adversary's knowledge, profiling models may easily be sub-optimal due to the inability to extract the most informative points of interest from the side-channel measurements.
When used for profiling attacks, deep neural networks can learn strong models without feature selection, at the cost of expensive hyperparameter tuning. Unfortunately, due to very large search spaces, one usually encounters very different model behaviors, and a widespread problem is overfitting with typically poor generalization capacity.
Usually, overfitting or poor generalization would be mitigated by adding more measurements to the profiling phase to reduce estimation errors.
This paper provides a detailed analysis of different deep learning model behaviors and shows that adding more profiling traces as a single solution does not necessarily help improve generalization.
We recognize the main problem to be the sub-optimal selection of hyperparameters, which is then difficult to resolve by simply adding more measurements. Instead, we propose to use small hyperparameter tweaks or regularization as techniques to resolve the problem.
Despite the admirable efforts of previous studies in enhancing the robustness of machine learning-based malware detectors, their practicality in providing adversarial robustness against realistic threat models is arguable because they did not consider realizable adversarial examples or low computational complexity. In this talk, I will present a promising way to improve the adversarial robustness of Android malware detection based on interpreting the domain constraints in the feature space. Specifically, I will propose a novel approach for extracting feature-space domain constraints by learning meaningful feature dependencies from data, and applying them based on a novel robust feature space. Moreover, I will show how using the learned domain constraints in conventional adversarial retraining can strengthen the robustness of Android malware detection against realizable adversarial examples.
The construction of coprime polynomials over finite fields has several applications in cryptography, coding theory and combinatorics. In the binary field GF(2), Benjamin and Bennett showed a simple way to count and enumerate coprime polynomial pairs based on what they call DilcuE's algorithm (i.e., Euclid's algorithm run backwards).
In this talk, we consider a specific case of the above problem, namely when both polynomials have the same degree and a nonzero constant term, which is motivated by the design of secret sharing schemes via cellular automata. Interestingly, this variant turns out to be more complicated, requiring different approaches ranging from formal languages to combinatorics to solve different aspects of the problem. In particular, we decompose our problem in two parts. First, we show that the sequences of constant terms for the quotients visited by DilcuE's algorithm can be modeled as words of a regular language recognized by a simple finite state automaton. This allows us in turn to count all such sequences by leveraging on classic results from algebraic language theory. On the other hand, we show that the sequences of degrees of quotients in DilcuE's algorithms are equivalent to compositions of natural numbers, which have a simple description in terms of partially ordered sets. Putting these two results together, we finally devise a combinatorial algorithm that enumerates all pairs of coprime polynomials with nonzero constant term of given degree.
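For intuition, here is a small Python sketch of the forward direction of the above, i.e., testing coprimality of two GF(2)[x] polynomials with Euclid's algorithm (DilcuE's algorithm is this procedure run backwards). Polynomials are encoded as integer bitmasks, a common convention; this is an illustration, not the enumeration algorithm from the talk.

```python
def gf2_mod(a, b):
    """Remainder of a modulo b, both GF(2)[x] polynomials encoded as
    integer bitmasks (bit i = coefficient of x^i)."""
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)  # cancel the leading term of a
    return a

def gf2_coprime(a, b):
    """Euclid's algorithm over GF(2)[x]: coprime iff the gcd is 1."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a == 1

# x^3 + x + 1 (0b1011) is irreducible; x^2 + x (0b110) = x(x + 1),
# so the two polynomials share no common factor.
print(gf2_coprime(0b1011, 0b110))  # True
```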
Side-channel leakage simulators allow testing the resilience of cryptographic implementations to power side-channel attacks without a dedicated setup. The main challenge in their large-scale deployment is the limited support for target devices, a direct consequence of the effort required for reverse engineering microarchitecture implementations. We introduce ABBY, the first solution for the automated creation of fine-grained leakage models. The main innovation of ABBY is the training framework, which can automatically characterize the microarchitecture of the target device and is portable to other platforms. Evaluation of ABBY on real-world crypto implementations exhibits comparable performance to other state-of-the-art leakage simulators.
Federated Learning (FL) enables collaboratively training Machine Learning (ML) models, where the data is retained locally. Similar to ML, FL has severe security weaknesses that attackers can exploit, e.g., model inversion and backdoor attacks. Model inversion attacks reconstruct the data from the training datasets, whereas backdoors misclassify only classes containing specific properties, e.g., a pixel pattern. Backdoors are very prominent in FL and aim to poison every client model, while model inversion attacks can target even a single client.
This paper introduces a novel technique to allow backdoor attacks to be client-targeted, compromising a single client while the rest remain unaltered. The attack takes advantage of state-of-the-art model inversion and backdoor attacks. More precisely, we leverage a Generative Adversarial Network to perform a model inversion attack. Afterward, we shadow-train the FL network, in which, using a Siamese Neural Network, we can identify, target, and backdoor the victim's model.
Our attack has been validated using the MNIST, Fashion MNIST, and Extended MNIST datasets under different settings, achieving 99% accuracy on both source (clean) and target (backdoor) classes, and opening a novel threat model to be considered in the future.
Social media platforms have been trying to be more transparent about the political ads they run on their platforms, since the Cambridge Analytica scandal revealed that political campaigns use social media on a large scale. One such transparency effort is the Facebook Ad Library, a public repository of all political ads run on Facebook and Instagram. This library provides journalists and researchers with data to get a better understanding of political advertising and microtargeting on Facebook's platforms. Unfortunately, the Facebook Ad Library only provides estimates and basic information. This paper analyses political ads in more depth, by examining the themes that ads are about. We provide a method to match themes to political Facebook ads, and we apply this method to analyse Facebook ad campaigns run by Dutch political parties during the 2021 Dutch general election.
Web users enter their email addresses into online forms for a variety of reasons, including signing in or signing up for a service or subscribing to a newsletter. While enabling such functionality, email addresses typed into forms can also be collected by third-party scripts even when users change their minds and leave the site without submitting the form. Email addresses---or identifiers derived from them---are known to be used by data brokers and advertisers for cross-site, cross-platform, and persistent identification of potentially unsuspecting individuals. In order to find out whether access to online forms is misused by online trackers, we present a measurement of email and password collection that occurs before the form submission on the top 100,000 websites. We evaluate the effect of user location, browser configuration, and interaction with consent dialogs by comparing results across two vantage points (EU/US), two browser configurations (desktop/mobile), and three consent modes. Our crawler finds and fills email and password fields, monitors the network traffic for leaks, and intercepts script access to filled input fields. Our analyses show that users' email addresses are exfiltrated to tracking, marketing and analytics domains before form submission and without giving consent on 1,844 websites in the EU crawl and 2,950 websites in the US crawl. While the majority of email addresses are sent to known tracking domains, we further identify 41 tracker domains that are not listed by any of the popular blocklists. Furthermore, we find incidental password collection on 52 websites by third-party session replay scripts.
Asuman Senol is a 3rd-year PhD student in the COSIC Research Group at KU Leuven, supervised by Gunes Acar. Prior to joining COSIC, she worked as a full-stack developer for 5 years. Her research focuses on online tracking and privacy on the web, and she conducts large-scale web measurement studies to analyze online tracking mechanisms.
In this talk, I will present optimization techniques for prime field operations on 64-bit Intel processors.
The presented results are part of our ongoing work on evaluating different variants of CSIDH as a Diffie-Hellman replacement in cryptographic protocols.
Regarding the underlying arithmetic, we want to show that using AVX2 vector instructions to implement multiprecision arithmetic outperforms the common approach of using 64-bit integer arithmetic.
Since in our work the sizes of p range from 2048 to 9216 bits, this contribution might be of independent interest for the optimization of other schemes, e.g., RSA software.
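To give a flavour of why vectorization pays off here, below is a Python model of the reduced-radix ("limb") representation commonly used in vectorized multiprecision code; this is a generic illustration, and the exact representation in this work may differ. Storing fewer than 64 bits per limb leaves headroom in each lane, so several additions can be performed lane-wise before carries must be propagated.

```python
# Reduced-radix limbs with lazy carries: the vector-friendly building block.
RADIX = 26                  # bits per limb; 64-bit lanes leave 38 bits of headroom
MASK = (1 << RADIX) - 1

def to_limbs(x, n):
    return [(x >> (RADIX * i)) & MASK for i in range(n)]

def from_limbs(limbs):
    return sum(l << (RADIX * i) for i, l in enumerate(limbs))

def add_lazy(a, b):
    """Lane-wise addition with no carry propagation -- the vectorizable step."""
    return [x + y for x, y in zip(a, b)]

def carry(limbs):
    """Occasional carry propagation, once limbs approach the lane size."""
    out, c = [], 0
    for l in limbs:
        t = l + c
        out.append(t & MASK)
        c = t >> RADIX
    return out, c

a, b = to_limbs(2**200 - 1, 9), to_limbs(12345, 9)
s, c = carry(add_lazy(a, b))
assert from_limbs(s) + (c << (RADIX * 9)) == (2**200 - 1) + 12345
```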
Machine learning-based side-channel attacks have recently been introduced to recover secret information from protected software and hardware implementations. Limited research exists for public-key algorithms, especially on non-traditional implementations like those using the Residue Number System (RNS). In this study, RNS-based ECC datasets are evaluated using four machine learning classifiers, and a comparison is provided with existing state-of-the-art template attacks. Moreover, we analyze the impact of raw features and advanced hybrid feature engineering techniques. We discuss the metrics and procedures that can be used for accurate classification on imbalanced datasets. The experimental results demonstrate that, for ECC RNS datasets, the efficiency of simple machine learning algorithms is better than that of complex deep learning techniques when such datasets are limited in size. This is the first study presenting a complete methodology for ML side-channel attacks on public-key algorithms, and it will also be presented at COSADE 2022.
In this work, we state an attack on the post-quantum KEM Kyber which combines a chosen-ciphertext attack (flipping a single bit of an otherwise valid ciphertext) with a fault that corrects the ciphertext again during decapsulation. By then using the Fujisaki-Okamoto transform as an oracle, we derive inequalities involving secret data, from which we may recover the private key. In addition, we use belief propagation to solve a system of inequalities and thereby improve a known recovery technique to efficiently and practically recover the secret key from a smaller number of inequalities compared to the previous method.
Julius Hermelink is a third year PhD student at Universität der Bundeswehr München supervised by Dr. Thomas Pöppelmann at Infineon Technologies AG. He works on (the combination of) side-channel, fault, and chosen-ciphertext attacks on post-quantum, mostly lattice-based, cryptography. Before that, he did his Master's in mathematics at the University of Munich (LMU), mainly focused on algebraic number theory and algebraic geometry.
Evaluating side-channel resistance in practice is a problematic and arduous process. Current certification schemes require attacking the device under test with an ever-growing number of techniques to validate its security. In addition, the success or failure of these techniques strongly depends on the individual implementing them, due to the fallible and intrinsically human nature of several steps of this path.
To alleviate this problem, we propose a new automatic way to ease one of the most crucial and tedious steps of profiling attacks, i.e., the points of interest (POI) selection, and hence assist the SCA evaluation process. To this end, we propose using Estimation of Distribution Algorithms (EDAs) in the SCA field to perform the POI selection step together with the profiling and key recovery steps. This contribution enables automated attack optimization, avoiding the need to manually perform various analyses with different POI combinations. Our findings are based on repeatable experimental evidence, showing how our approach can straightforwardly provide state-of-the-art results.
Proper attribution is a key factor in digital forensics. In this presentation, we investigate one such area of attribution: file timestamp history. This is crucial for determining whether a file could have been accessible to a suspect.
In particular, we determine file history given a file’s current NTFS timestamps. That is, we infer all possible sequences of file system operations which culminate in the file’s current NTFS timestamps. To this end, we provide an overview of such operations and their effect on timestamps. Based upon this overview, we present a method for reconstructing possible histories of operations.
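As a toy illustration of this inference problem (a deliberately simplified model, not the method or the timestamp semantics from the presentation), one can brute-force operation sequences that are consistent with an observed timestamp triple:

```python
# Toy model: state is (Created, Modified, Accessed); `t` is the time at which
# an operation happens. The effects below are simplified from real NTFS rules.
from itertools import product

OPS = {
    "create": lambda s, t: (t, t, t),
    "modify": lambda s, t: (s[0], t, t),
    "access": lambda s, t: (s[0], s[1], t),
}

def histories(observed, times, depth=3):
    """Enumerate operation sequences over the given timestamps that end in
    the observed (C, M, A) triple. The first operation must be `create`."""
    for ops in product(OPS, repeat=depth):
        if ops[0] != "create":
            continue
        for ts in product(times, repeat=depth):
            if list(ts) != sorted(ts):
                continue                    # operations happen in time order
            state = (None, None, None)
            for op, t in zip(ops, ts):
                state = OPS[op](state, t)
            if state == observed:
                yield list(zip(ops, ts))

for h in histories(observed=(1, 5, 9), times=(1, 5, 9)):
    print(h)   # e.g. [('create', 1), ('modify', 5), ('access', 9)]
```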
In this talk, I will give an overview of the theoretical contribution of our paper, “ABE Squared: Accurately Benchmarking Efficiency of Attribute-Based Encryption”, which was recently published at CHES 2022. As the title suggests, ABE Squared is a framework for accurately benchmarking the efficiency of attribute-based encryption (ABE), and can be used to fairly compare the efficiency of multiple pairing-based ABE schemes. We do this by considering multiple layers of optimization that are relevant to the implementation of ABE. In contrast to existing work, we explicitly take into account the design goal of the designer or practitioner, who may want to optimize, e.g., the encryption algorithm. We show that, for various design goals, ABE schemes may be optimized differently. By extension, such schemes may compare differently as well. To demonstrate our framework, we have implemented and benchmarked the optimized descriptions of several ABE schemes.
We tackle the challenge of reliably determining the geolocation of nodes in decentralized networks, considering adversarial settings and without depending on any trusted landmarks. In particular, we consider active adversaries that control a subset of nodes, announce false locations and strategically manipulate measurements. To address this problem we propose, implement and evaluate VerLoc, a system that allows verifying the claimed geo-locations of network nodes in a fully decentralized manner. VerLoc securely schedules roundtrip time (RTT) measurements between randomly chosen pairs of nodes. Trilateration is then applied to the set of measurements to verify claimed geo-locations. We evaluate VerLoc both with simulations and in the wild using a prototype implementation integrated in the Nym network (currently run by thousands of nodes). We find that VerLoc can localize nodes in the wild with a median error of 60 km, and that in attack simulations it is capable of detecting and filtering out adversarial timing manipulations for network setups with up to 20 % malicious nodes.
Messaging services allow new users that join a service to find existing contacts that already use that service through a process called contact discovery. Existing users are similarly informed of new users that are already on their contact list. Current contact discovery protocols either allow the server to reconstruct the social graph (i.e., the graph describing who is a contact of who), which is a serious privacy issue, or use trusted hardware to prevent this. But even in the latter case, privacy is still at stake: when you join and enable contact discovery, anyone already on the service that has your number on their contact list gets notified that you joined. Even if you don’t know that person, or if it is an ex or former colleague that you long parted with and whose contact details you deleted long ago. To solve this, we propose several mutual contact discovery protocols, that only allow users to discover each other when both are (still) in each other’s contact list. Mutual contact discovery has the additional advantage that it can be implemented in a more privacy friendly fashion (e.g., protecting the social graph from the server) than traditional, one-sided contact discovery, without even relying on trusted hardware.
We revisit the popular adage that side-channel countermeasures must be combined to be efficient, and study its application to bitslice masking and shuffling. Our contributions are threefold. First, we improve this combination: by shuffling the shares of a masked implementation rather than its tuples, we can amplify the impact of the shuffling exponentially in the number of shares, while this impact was independent of the masking security order in previous works. Second, we evaluate the masking and shuffling combination’s performance vs. security tradeoff under sufficient noise conditions: we show that the best approach is to mask first (i.e., fill the registers with as many shares as possible) and shuffle the independent operations that remain. Third, we discuss the challenges for implementing masking and shuffling under low noise conditions: we recall that such algorithmic countermeasures cannot be implemented securely without a minimum level of physical noise. We conclude that with moderate but sufficient noise, the bitslice masking + shuffling combination is relevant, and its interest increases when randomness is expensive and many independent operations are available for shuffling. When these conditions are not met, masking only is the best option. As a side result, we improve the best known attack against shuffling from Asiacrypt 2012, which we use in our concrete evaluations.
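For readers less familiar with the two countermeasures, here is a minimal Python stand-in (purely illustrative; real implementations operate on the registers of an embedded device) showing Boolean masking of a secret into shares and a per-execution random processing order for those shares:

```python
import secrets, random

def mask(secret, n_shares):
    """Split an 8-bit secret into n_shares random shares whose XOR is the secret."""
    shares = [secrets.randbits(8) for _ in range(n_shares - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def process_shuffled(shares):
    order = list(range(len(shares)))
    random.shuffle(order)            # fresh random order each execution: an
    acc = 0                          # attacker no longer knows which share
    for i in order:                  # leaks at which point in time
        acc ^= shares[i]             # placeholder for the real per-share work
    return acc

shares = mask(0x2A, n_shares=4)
assert process_shuffled(shares) == 0x2A
```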
Kostas Papagiannopoulos is a researcher on Applied Cryptography, Side-Channel Analysis and Hardware Security. He is currently an assistant professor in the department of informatics of the University of Amsterdam (UvA), in the Netherlands.
He is a member of the System and Networking Lab (SNE), in particular the group for Complex Cyber Infrastructures (CCI) and the group for Security by Design (SBD).
ROCKY is a recently introduced countermeasure against fault attacks for authenticated encryption algorithms. It is based on randomly rotating the internal state of cryptographic primitives that make use of a permutation with (almost) shift-invariant round functions. ROCKY can be combined with modular redundancy in order to detect faults. In this talk I am going to present the idea of this approach and how it is implemented on FPGA-oriented architectures. Xoodoo, the permutation underlying Xoodyak, is used as a case study. Xoodyak is one of the finalists in the NIST lightweight cryptography standardization competition. I will also discuss ROCKY's secondary security claims against side-channel attacks.
Testing the resilience of a cryptographic implementation against side-channel attacks requires dedicated equipment and expertise. In recent years, we have seen the emergence of leakage emulators, which aim to replace the physical device and measurement setup as an alternative approach for evaluating resilience to side-channel attacks. In this (informal) talk, my aim is to introduce these wonderful tools to you and highlight some of their achievements, but also some of their limitations. The talk is aimed at non-side-channel experts.
Once quantum computers break RSA and ECC, post-quantum cryptography will save the world by providing replacements that (hopefully) won't be broken by overpowered superposition algorithms. Generally, these post-quantum algorithms come in 5 different flavours, with isogeny-based cryptography being one of them.
Unfortunately, isogeny-based cryptography has gained a reputation of being "math heavy", "abstract", and "difficult". In this (relatively informal) lunch talk, I will explain isogeny-based cryptography using rocket science in the hopes of making sure people can actually follow along for more than 5 slides. I will also try to explain some of the results from a recent paper on different methods to compute such isogenies.
Cyclotomic ideals are special types of ideals in the multivariate polynomial ring over some given base field.
The algebraic structure of a quotient ring modulo a cyclotomic ideal is used in cryptographic protocols like the Subterranean 2.0 cipher suite and in SHA-3.
With the assumption that our base field is algebraically closed, we can apply results from commutative algebra, like Hilbert's Nullstellensatz and the Chinese Remainder Theorem, to get a better understanding of cyclotomic ideals.
However, we cannot always make this assumption: in cryptography, for example, we require the base field to be a finite field.
In this talk, we will introduce cyclotomic ideals and give some analogies regarding the algebraic structure of these ideals over algebraically and non-algebraically closed base fields.
These analogies turn out to be quite strong, which allows us to generalize the results to all fields.
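As a simple one-variable analogue of the structure in question (an illustration only, not the general multivariate setting of the talk): if $\gcd(n, \mathrm{char}\,\mathbb{F}) = 1$, then $x^n - 1$ is squarefree and the Chinese Remainder Theorem gives
$$\mathbb{F}[x]/(x^n - 1) \;\cong\; \mathbb{F}[x]/(f_1) \times \cdots \times \mathbb{F}[x]/(f_k),$$
where $x^n - 1 = f_1 \cdots f_k$ is the factorization into distinct irreducible polynomials over $\mathbb{F}$. Over an algebraically closed field every $f_i$ is linear, so the quotient is simply $\mathbb{F}^n$; over a finite field the factors $f_i$ may have higher degree, which is exactly where the analogies become non-trivial.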
In this talk we investigate a class of large-weight syndrome decoding problems in random ternary codes.
This is the main generic problem underlying the security of the recent Wave signature scheme (Debris-Alazard et al., 2019), and it has so far received limited attention. At SAC 2019, Bricout et al. proposed a reduction to a binary subset sum problem requiring many solutions, and used it to obtain the fastest known algorithm. However, as is often the case in the coding theory literature, its memory cost is proportional to its time cost, which makes it unattractive in most applications.
In this work we propose a range of memory-efficient algorithms for this problem, which describe a near-continuous time-memory tradeoff curve. Those are obtained by using the same reduction as Bricout et al. and carefully instantiating the derived subset sum problem with exhaustive-search algorithms from the literature, in particular dissection (Dinur et al., 2012) and dissection in tree (Dinur, 2019). We also spend significant effort adapting those algorithms to decrease their granularity, thereby allowing them to be smoothly used in a syndrome decoding context when not all the solutions to the subset sum problem are required.
Automated Teller Machines (ATMs) are the most widely used system for withdrawing cash. Although ATMs have undergone various technological evolutions, Personal Identification Numbers (PINs) are still the most common authentication method for these devices. However, the PIN mechanism is vulnerable to shoulder-surfing attacks, or to attacks performed via hidden cameras installed near the ATM to capture the PIN pad. To counter this, people are used to covering the typing hand with the other hand. But is this behavior sufficient to ensure the security of our PINs?
In this talk, I discuss a work where we created a replica of an ATM and recorded videos of users entering PINs while covering their typing hands, in order to train deep learning models. Once the deep learning models were trained, the attack was carried out on victims entering covered 4-digit and 5-digit PINs. On average, the models succeed in reconstructing 30% of 5-digit PINs and 41% of 4-digit PINs within three attempts, outperforming humans on the same task. The achieved results pose a real threat to PIN authentication systems, highlighting the need for improved countermeasures against this attack.
Fault Injection (FI) attacks have become a practical threat to modern cryptographic implementations. Such attacks have recently focused more on the exploitation of implementation-centric and device-specific properties of the faults. In this talk, we consider the parallel between SCA attacks and FI attacks; specifically, that many FI attacks rely on the data-dependency of activation and propagation of a fault, and SCA attacks similarly rely on data-dependent power usage. In fact, these are so closely related that we show that existing SCA attacks can be directly applied in a purely FI setting, by translating power FI results to generate FI 'probability traces' as an analogue of power traces. We impose only the requirements of the equivalent SCA attack (e.g., knowledge of the input plaintext for CPA on the first round), along with a way to observe the status of the target (whether or not it has failed and been "muted" after a fault). We also analyze existing attacks such as Fault Template Analysis in the light of this parallel and discuss the limitations of our methodology.
This talk is based on the talk from CHES 2021, but different aspects will be highlighted.
The Montgomery Ladder is a popular choice for implementing scalar multiplication on elliptic curve cryptosystems due to its efficiency. If implemented correctly, the Montgomery Ladder also provides robustness against certain classes of (simple) side-channel analysis (SCA) attacks. However, on top of implementing the Montgomery Ladder, ECC designs may integrate further SCA countermeasures to achieve resistance against, e.g., differential SCA attacks.
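For reference, the ladder's structure looks as follows (a generic sketch, shown here for modular exponentiation; elliptic-curve implementations use the same control flow with point addition and doubling instead of multiplication and squaring):

```python
def montgomery_ladder(x, k, mod):
    """Compute x^k mod `mod` with a Montgomery ladder: both branches perform
    the same two operations, only the roles of r0/r1 swap, which is what gives
    the ladder its resistance against simple SCA."""
    r0, r1 = 1, x                    # invariant: r1 == r0 * x
    for bit in bin(k)[2:]:           # scan key bits left to right
        if bit == '0':
            r1 = (r0 * r1) % mod
            r0 = (r0 * r0) % mod
        else:
            r0 = (r0 * r1) % mod
            r1 = (r1 * r1) % mod
    return r0

assert montgomery_ladder(5, 117, 1009) == pow(5, 117, 1009)
```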
In this talk we will take a look at one type of differential power analysis (DPA) attack on ECC implementations, known as the "comparison of the mean". This attack can be performed with a single execution trace, and it works independently of popular (vertical) DPA countermeasures such as key randomization and point blinding. We discuss scenarios in which this attack may be successful and explain how we may use it as a tool for assessing the security of our own designs. In particular, this attack may help us identify implementation errors which may not be obvious at first sight.
In the second part of this talk, we focus on the initialisation phase of the Montgomery Ladder and explain how this initialisation affects the vulnerability of ECC implementations to SCA. We propose a re-design of this initialisation phase which leads to more SCA-robust implementations and does not imply any additional costs in terms of size, speed or energy consumption of our designs.
Organizations may use artificial intelligence to make decisions about people for a variety of reasons, for instance, to select the best candidates from many job applications. However, AI systems can have discriminatory effects when used for decision-making. AI systems could reject applications of people with a certain ethnicity, while the organization did not plan such ethnicity discrimination. But in Europe, an organization runs into a problem when it wants to assess whether its AI system accidentally leads to ethnicity discrimination: the organization does not know the applicants' ethnicity. In principle, the GDPR bans the use of certain ‘special categories of data’ (sometimes called ‘sensitive data’), which include data on ethnicity, religion, and sexual preference.
This paper asks: What are the arguments for and against a law that creates an exception to the GDPR's ban on collecting special categories of personal data, to enable auditing AI for discrimination? We analyse whether the GDPR indeed hinders the prevention of discrimination, and we map out the arguments in favour of and against introducing such a new exception to the GDPR. The paper discusses European Union law, but the paper can be relevant outside Europe too, as many policymakers in the world grapple with the tension between privacy and non-discrimination policy.
Our main focus is on the ECDLP in the case of elliptic curves defined over prime-degree binary extension fields, using the index calculus attack. A crucial step of this attack is solving the point decomposition problem, which consists in finding zeros of Semaev’s summation polynomials and can be reduced to the problem of solving a multivariate Boolean polynomial system. To this end, we encode the point decomposition problem as a logical formula and define it as an instance of the SAT problem. Then, we propose an original XOR-reasoning SAT solver, named WDSat, dedicated to this specific problem.
Classic McEliece is a finalist in the NIST post-quantum standardization process. Although the final decision is approaching, many aspects of the security of the finalists remain unanswered, in particular the security of KEMs when side-channel information leaks.
I will talk about an attack we developed against the official FPGA implementation of the NIST finalist “Classic McEliece”.
The attack is basically an SCA-assisted Information Set Decoding (ISD) attack that we enhance using a new technique that we call iterative chunking. The new technique enables us to significantly reduce the number of required side-channel measurements. For example, for the 256-bit security parameter set kem/mceliece6960119 of “Classic McEliece”, we improve the basic attack that requires 5415 measurements to less than 562 measurements on average to mount a successful plaintext-recovery attack. Further reductions can be achieved at the price of increasing the cost of the ISD computations. We confirm our findings by practically mounting the attack on the official FPGA implementation of “Classic McEliece” for all proposed parameter sets.
Ever since the days of 2G GSM networks, IMSI-catchers have been one of the most well-known attacks on mobile networks. They allow an attacker to track and localize users without easily noticeable signs to the user. Because they abuse legitimate procedures, preventing these attacks requires changes to the protocol. The fifth generation of mobile networks has finally taken these steps and introduced measures to prevent IMSI-catching. In this talk we explore how an IMSI-catcher works and how 5G solved this problem.
In symmetric cryptography, doubling the sizes of keys is often assumed to be a sufficient protection against quantum adversaries. This is because Grover's quantum search algorithm, which can be used for generically recovering the key, is limited to a quadratic speedup.
In this talk, we will study this key-recovery problem for block cipher constructions in the ideal model, such as the Even-Mansour or FX ciphers.
In the superposition setting (a strong model of quantum attackers), some of these constructions are completely broken, even though they have classical security proofs. But attacks using only classical queries, which are deemed more realistic, have remained much more limited to date. As they reached at most quadratic speedups, they have so far confirmed the intuition that security levels should simply be doubled in general.
We will show that this is not always the case, by presenting a symmetric block cipher design with: 1. a security bound of 2^{2.5n} against classical adversaries, and 2. a quantum attack running in time roughly 2^n that uses classical queries only. This gives, for the first time, a proven more-than-quadratic (exponent factor 2.5) quantum speedup in the classical query model.
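Read plainly, the gap is visible in the exponents (this is just a restatement of the claim above): a generic Grover-type attack would at best square-root the classical cost, a factor 2 in the exponent, whereas here
$$T_{\text{classical}} = 2^{2.5n}, \qquad T_{\text{quantum}} \approx 2^{n}, \qquad \text{so} \quad \frac{\log T_{\text{classical}}}{\log T_{\text{quantum}}} = 2.5 > 2.$$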
I would like to take you on a (condensed) journey through my life. What triggered my passion for computers and cyber security, the war stories I collected during my teens, the decision to follow computer science and later the Kerckhoffs program at the RU, my career choices and finally what it is like to start your own cyber security company. Spoiler alert: crazy fun, but a hell of a lot of work.
We would like to share a bit more and get your opinion on what we are working on with Codean: the Security Review Environment (SRE). We are particularly keen on working on three rather complex features for the SRE, namely: Symbiotic Taint Analysis, formalized vulnerabilities for machine learning, and heuristics to identify security-relevant points of interest in source code. If you were thinking about doing a combined internship/(master) thesis, any one of these could serve as your thesis topic during your internship at Codean.
Hopefully this will give you some insight into the opportunities and choices you can make now and during the early days of your working life. See you Friday!
In this talk I will present an overview of my past research on online tracking, anonymous communications and dark (design) patterns.
The talk will cover empirical measurements of advanced tracking techniques such as browser fingerprinting; traffic analysis in Tor anonymous communications network; and security and privacy risks posed by IoT devices and mobile apps. I will also present my research on identifying dark patterns at scale, and will conclude by discussing potential future work.
We introduce SPEEDY, a family of ultra low-latency block ciphers. We mix engineering expertise into each step of the cipher's design process in order to create a secure encryption primitive with an extremely low latency in CMOS hardware. The centerpiece of our constructions is a high-speed 6-bit substitution box whose coordinate functions are realized as two-level NAND trees. In contrast to other low-latency block ciphers such as PRINCE, PRINCEv2, MANTIS and QARMA, we neither constrain ourselves by demanding decryption at low overhead, nor by requiring a super low area or energy. This freedom, together with our gate- and transistor-level considerations, allows us to create an ultra low-latency cipher which outperforms all known solutions in single-cycle encryption speed. Our main result, SPEEDY-6-192, is a 6-round 192-bit block and 192-bit key cipher which can be executed faster in hardware than any other known encryption primitive (including Gimli in an Even-Mansour scheme and the Orthros pseudorandom function) and offers 128-bit security. One more round, i.e., SPEEDY-7-192, provides full 192-bit security. SPEEDY primarily targets hardware security solutions embedded in high-end CPUs, where area and energy restrictions are secondary while high performance is the number one priority.
Despite its popularity, Machine Learning (ML)-based malware detectors are still known to be vulnerable to adversarial examples. Adversaries aim to evade ML-based malware detectors by morphing existing malware into adversarial examples through a series of advanced manipulations. Over the last years, several studies have presented evasion attacks that aim to make adversarial examples stealthy against ML-based detection. However, the proposed solutions either assume the attacker to have advanced knowledge of the ML model, or the proposed adversarial examples fail to execute in the real world. In this talk, I will present a novel, practical evasion attack that can generate real-world adversarial examples in a black-box setting, while remaining undetected to state-of-the-art ML-based malware detectors.
This informal lunch talk gives an overview of my "human-centered cryptography" research line, which I am currently establishing at the Digital Security group. I will briefly discuss the topics (1) authentication, (2) encryption and (3) digital signatures from a human-centered perspective. I will start with examples of past work on authentication and encryption (e.g., IRMA, cryptify.nl). Then, I will provide a glimpse into our ongoing work that uses identity-based and attribute-based encryption for emails (IDlock). Finally, I will sketch out our plans for bringing user-friendly digital signatures to social media and email (Twid, SocialSign).
On Friday the Digital Security group will have a nice hike through the hills near Nijmegen. In this talk I'll describe some of the sights and highlights of the routes with geological, topographical and historical information.
This should also be interesting if you're not participating in the hike.
Side-channel attacks are a threat to secrets stored on a device, especially if an adversary has physical access to the device. As an effect of this, countermeasures against such attacks for cryptographic algorithms are a well-researched topic. In this work, we deviate from the study of cryptographic algorithms and instead focus on the side-channel protection of a much more basic operation, the comparison of a known attacker-controlled value with a secret one. Comparisons sensitive to side-channel leakage occur in tag comparisons during the verification of message authentication codes (MACs) or authenticated encryption, but are typically omitted in security analyses. Moreover, comparisons performed as part of fault countermeasures might also be sensitive to side-channel attacks. In this work, we present a formal analysis of comparing values in a leakage-resilient manner by utilizing cryptographic building blocks that are typically part of an implementation anyway. Our results indicate that there is no need to invest additional resources into implementing a protected comparison operation itself if a sufficiently protected implementation of a public cryptographic permutation, or a (tweakable) block cipher, is already available. We complement our contribution by applying our findings to the SuKS message authentication code used by the lightweight authenticated encryption scheme ISAP, and to the classical Hash-then-PRF construction.
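The core idea can be sketched as follows (a loose Python illustration of comparing processed rather than raw values, with a hash standing in for the protected permutation; the paper's constructions and leakage analysis are considerably more subtle):

```python
import hashlib, secrets

def protected_compare(secret_tag: bytes, candidate: bytes) -> bool:
    """Instead of comparing the secret tag with the attacker-controlled
    candidate directly, both are first processed with fresh randomness, so
    the final equality check operates on unpredictable values."""
    r = secrets.token_bytes(16)                    # fresh randomness per call
    a = hashlib.sha3_256(r + secret_tag).digest()  # stand-in for the permutation
    b = hashlib.sha3_256(r + candidate).digest()
    return a == b                                  # equal iff the tags were equal

tag = secrets.token_bytes(16)
assert protected_compare(tag, tag)
assert not protected_compare(tag, secrets.token_bytes(16))
```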
This presentation discusses the shortcomings of the original screen gleaning paper. To cope with these shortcomings, a new detection method had to be developed: edge detection. The edge-detection method is designed for side-channel analysis on channels that transmit bitstreams. It makes use of the bandwidth between multiple higher-order harmonics of the original frequency at which the bitstream is transmitted. With this method, peaks occur at the transitions of the signal, allowing the user to detect the edges of the bitstream. This presentation will show that the location of these transitions is enough to confidently interpret the information in the bitstream. The advantage of only using higher-order harmonics is that high-frequency antenna designs can be used, which are in general more scalable. Edge detection can be applied to different use cases related to side-channel analysis attacks. This presentation gives a simplified example of such a use case, using HDMI cables with differential signaling.
As my research is a bit orthogonal to the interests of the group, in this talk I will give a more general overview of some of the things I've worked on. I'll start by giving an introduction to quantum computing, focusing on the use of quantum circuits. Then I'll show how we can view quantum computations as ZX-diagrams, a pictographic representation of computations. Using the ZX-calculus we can then reason entirely graphically about quantum computations. If there is enough time, I will demonstrate the power of this method by giving a diagrammatic proof of the Gottesman-Knill theorem, a famous result in quantum computing which says that a certain useful class of quantum circuits can be efficiently simulated classically.
In this talk I present the results of a paper that is based on my teaching experience with the course Semantics and Rewriting. One of the things that students need to learn in that course is proving termination of a term rewrite system using the lexicographic path order (LPO) method. I noticed in the past that students consider this a difficult proof method, due to its recursive definition and the nested structure in natural language. Therefore, I introduced some new, more formal methods, in tree style and in Fitch style, hoping that this would make it easier for students to actually formulate these proofs.
However, besides being able to formulate such proofs on paper, it is also important that students can formulate the same kind of proofs in the Cirrus exam environment, so that is also addressed.
The exam of this course is on June 22, so hopefully I will be able to tell you on June 25 how the students performed at this particular exercise.
For the past quarter of a century, we've been using contact and contactless smartcards for just about anything. But the days of the smartcard are numbered: soon we will only use our mobile phones for everything instead.
This does raise questions on how to design and evaluate smartphone-based authentication solutions. There is a larger design space of solutions (which includes the choices between centralising vs decentralising that Eric Verheul discussed last week), with more and more complex components, for use in online and/or offline settings. How do we judge or compare the security of solutions, let alone certify them in some rigorous way?
It is not clear what the right answers or even the right questions here are. Goal of this talk is to stir up some discussion about all this.
Since the start of 2021, Dutch citizens can log in to public service providers using their contactless Dutch identity card by presenting it to DigiD, the centralized authentication provider of the Dutch government. In this talk I will explain the cryptographic design of this setup, which I designed around 2014. One of its key properties is its implementation of k-anonymity: while DigiD knows which service provider a card user wants to authenticate to and delivers the user's social security number (BSN) to it, DigiD is not able to recognize the user. I will argue that this provides a good trade-off between the conflicting properties of security and privacy. The new Dutch eID card shows that the use of centralized authentication is not necessarily privacy-unfriendly, which is a common misconception in the (academic) literature.
In this talk I will present an update of our work on training robust machine learning (ML) models and discuss the implications of robustness to security of ML. The focus will be on training robust ML models with small data sets, although a larger discussion about ML robustness in the context of ML engineering and ML policy will be provided (i.e., within the EU guidelines for trustworthy AI).
Small-scale elections, about your street or a nearby park, as organized by municipalities, are now often organized ad hoc: without proper access control, or using expensive snail mail. We set out to improve on this situation, i.e., to bring those kinds of elections into the modern era. In our approach, a voter registers at our first server. If the voter is eligible (lives in a certain street), the voter obtains an anonymous voting number from this server by using random-blind attributes (as just implemented by IRMA). On the second server, that anonymous voting number can be used to actually cast a vote. After the election, a public record with votes signed with the anonymous voting numbers is published, so that voters can verify the results. Voters are anonymous as long as the two servers are properly separated and secured. Currently, user testing is underway in Groningen and Amsterdam. In the talk, the trade-offs are discussed, an overview of the security design is given, and the first interesting insights from the user tests are shared.
WebRTC revolutionised videoconferencing on the Web. In this short introduction to WebRTC, you will discover the different technologies that enable modern applications such as Zoom, Jitsi, BigBlueButton and Skype to manage real-time audio and video communications over an unstable and chaotic Internet.
Are you no longer hearing your supervisor on Skype? Do you sound like you are in a submarine on Discord? The goal of this presentation is to explain what is happening behind the scenes and to give advice and tools to understand these issues.
WireGuard is a modern VPN protocol invented by Donenfeld. It was presented at NDSS 2017 and is now part of the mainline Linux kernel. WireGuard improves on solutions like OpenVPN or IPsec in terms of performance, code complexity, and security. However, protection against quantum attackers was considered out of scope in the design.
In this talk I will present our proposal for a post-quantum variant of the WireGuard handshake, which will also be presented at Oakland 2021. This proposal is obtained by first moving the Diffie-Hellman-based WireGuard handshake to a generic KEM-based construction and then carefully instantiating the KEMs to retain WireGuard's performance and security advantages to the maximum extent.
In cryptography, authenticity of a message is as important as its confidentiality.
A Message Authentication Code (MAC) function compresses a variable-length message to a fixed-length tag, which is then verified to check the message's authenticity.
MAC functions, while sometimes used standalone, are also used in conjunction with an encryption algorithm, i.e., in authenticated encryption.
We wish to design a keyed compression function for use in a MAC function that is at the same time efficient and offers a high level of collision resistance.
There are several ways to build a MAC function: block-cipher-based (CMAC, PMAC) or hash-function-based (HMAC, KMAC).
Such constructions are computationally heavy, but there are lighter alternatives based on multiplication in some large finite field: GMAC over a field of characteristic 2 and Poly1305 over a prime field. And then there is NH-hash, which uses integer multiplication without any modular reduction.
NH-hash uses integer multiplication over 32-bit words and is very fast on many CPUs thanks to the presence of fast instructions for large integer multiplication (16-bit or 32-bit).
NH-Hash achieves 64 bits of collision-resistance security by combining 4 instances of a basic function called NH, and costs 4 multiplications and 16 additions per 64-bit input block.
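For reference, the basic NH function can be sketched as follows (following the NH of UMAC for intuition; the exact variant the talk builds on may differ): pairs of 32-bit message and key words are added mod 2^32, multiplied to a 64-bit product, and summed mod 2^64.

```python
M32 = (1 << 32) - 1
M64 = (1 << 64) - 1

def nh(key, msg):
    """key and msg are equal-length lists of 32-bit words (even length)."""
    acc = 0
    for i in range(0, len(msg), 2):
        a = (msg[i] + key[i]) & M32
        b = (msg[i + 1] + key[i + 1]) & M32
        acc = (acc + a * b) & M64    # one 32x32->64-bit multiply per word pair
    return acc

import secrets
key = [secrets.randbits(32) for _ in range(4)]
print(hex(nh(key, [0xdeadbeef, 0xcafebabe, 0x01234567, 0x89abcdef])))
```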
Taking NH-Hash as a starting point, we have designed a new function called the Multi-mixer.
Its ambition is to offer 128 bits of collision resistance security at a workload of 2 multiplications and 10 additions per 64-bit input block: twice the security for about half the workload of NH-Hash.
Deep neural networks have become the state-of-the-art method for profiled side-channel analysis. Their popularity is mostly due to neural nets overcoming some of the drawbacks of ``classical'' side-channel attacks, such as the need for feature selection or waveform synchronization, in addition to their capability to bypass certain countermeasures like random delays. Systematically designing and tuning a neural network for side-channel analysis is a complicated task. There exist hyperparameter tuning techniques which can be used in the side-channel analysis context, like Grid Search, but they are not optimal, since they usually rely on specific machine learning metrics that cannot be directly linked to, e.g., the success of the attack.
We propose a customized version of an existing statistical methodology called Six Sigma for optimizing the deep learning-based side-channel analysis process. We demonstrate the proposed methodology by successfully attacking a masked software implementation of AES.
Automatic exploit generation is a relatively new area of research. Work in this area aims to automate the manual and labour-intensive task of finding exploits in software. With Exploit Logic, we aim to lay the foundation for a new automatic exploit generation system. Exploit Logic formally defines the relation between exploits and the preconditions that allow them to occur. This relation is used to calculate the search space of preconditions for a specific program and exploit. This presentation is an introduction to Exploit Logic and the current state of my research.
Base extensions are a substantial part of modular reduction using the residue number system (RNS) in the context of asymmetric cryptography. In this presentation, we discuss the base extension proposed by Djath, Bigou and Tisserand [Hierarchical approach in RNS base extension for asymmetric cryptography. In 26th IEEE Symposium on Computer Arithmetic (ARITH), pp. 46–53. IEEE, 2019]. Our proposed algorithm is based on a hierarchical approach to the Chinese remainder theorem computation. We demonstrate that the theoretical computation cost of the base extension operation is reduced while the inherent parallelism of the residue number system is preserved. For typical field sizes employed in elliptic curve cryptography, FPGA implementations using high-level synthesis tools show that the reduction in theoretical computation cost translates into area and time gains.
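As background, a (non-hierarchical) base extension can be illustrated in a few lines: a toy version via a full CRT reconstruction, where the point of the hierarchical approach is precisely to organize this computation more efficiently.

```python
from math import prod

def crt(residues, moduli):
    """Recover X (mod the product of the moduli) from its residues."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse
    return x % M

def base_extend(residues, base, new_base):
    """Extend the RNS representation of X from `base` to `new_base`."""
    x = crt(residues, base)
    return [x % m for m in new_base]

B, B2 = [13, 17, 19], [23, 29]
X = 1234
print(base_extend([X % m for m in B], B, B2))  # == [1234 % 23, 1234 % 29]
```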
The need for power- and energy-efficient computing has resulted in aggressive cooperative hardware-software energy management mechanisms on modern commodity devices. Most systems today, for example, allow software to control the frequency and voltage of the underlying hardware at a very fine granularity to extend battery life. Despite their benefits, these software-exposed energy management mechanisms pose grave security implications that have not been studied before.
Here I will present the CLKSCREW attack, a new class of fault attacks that exploit the security-obliviousness of energy management mechanisms to break security. A novel benefit for attackers is that these fault attacks become more accessible, since they can now be conducted without the need for physical access to the devices or fault injection equipment. We demonstrate CLKSCREW on commodity ARM/Android devices. We show that a malicious kernel driver (1) can extract secret cryptographic keys from TrustZone, and (2) can escalate its privileges by loading self-signed code into TrustZone. As the first work to show the security ramifications of energy management mechanisms, we urge the community to re-examine these security-oblivious designs.
Authors: Adrian Tang, Simha Sethumadhavan, and Salvatore Stolfo
In the world of smart metering, every byte transmitted costs money, so there is a desire to minimize the amount of data sent by smart meters. Compression may seem like a good option for this, but it may have privacy implications. In this talk I will show that applying straightforward compression to encrypted energy use data from smart meters in the Netherlands would allow an attacker performing traffic analysis to reveal private information.
The first part of the talk briefly introduces the smart metering landscape in the Netherlands, explaining the influence of privacy and security on certain design choices. Armed with this background knowledge we then take a real-world dataset, encode it using the industry-standard DLMS/COSEM protocol, and analyze the effects that the compression used in DLMS/COSEM has on the size of messages transmitting this dataset. We find that using this compression mechanism would allow an attacker to infer e.g. when a household is on vacation. To solve this problem, we propose a smarter encoding of the data that still achieves very good data savings, without leaking the energy use pattern of households.
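The leak is easy to reproduce (a toy demonstration with zlib standing in for the DLMS/COSEM compression; it is not the actual protocol encoding): repetitive readings compress much better than varying ones, so ciphertext length alone reveals the usage pattern.

```python
import zlib, random

# Flat consumption (e.g., a household on vacation) vs. varying consumption,
# both 96 time slots of one byte each.
vacation = bytes([0] * 96)
at_home = bytes(random.randrange(256) for _ in range(96))

print(len(zlib.compress(vacation)))  # small: highly compressible
print(len(zlib.compress(at_home)))   # close to 96 bytes or larger
```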
Subterranean 2.0 has been selected as a candidate in the second round of NIST's lightweight cryptography standardization process. Recent studies have shown that Subterranean performs well in hardware compared to other candidates. Of course, a cryptographic algorithm requires high security in addition to performance, so we need to check whether the algorithm is secure enough. For this purpose, we can use existing cryptanalysis techniques, of which differential cryptanalysis is one of the strongest. In the case of cryptographic primitives with weak alignment (such as Keccak and Subterranean), the estimation of the probability of differential trails is obtained via computer-assisted proofs and dedicated tools. In this presentation, I will give a detailed analysis of the differential trail search in Subterranean.
Ronan Ó Fathaigh, Tom Dobber, James Shires, and Frederik Zuiderveen Borgesius will present a draft article, called 'Microtargeted propaganda by foreign actors: An interdisciplinary exploration’.
The article discusses a problem that has received scant attention in the literature: microtargeted propaganda by foreign actors. Microtargeting involves collecting information about people and using that information to show them targeted political advertisements. Such microtargeting enables advertisers to target ads to specific groups of people, for instance people who visit certain websites, forums, or Facebook groups. This article focuses on one type of microtargeting: microtargeting by foreign actors. For example, Russia has targeted certain groups in the US with ads, aiming to sow discord. Foreign actors could also try to influence European elections, for instance by advertising in favour of a certain political party. Foreign propaganda possibilities existed before microtargeting. This article explores two questions. In what ways, if any, is microtargeted propaganda by foreign actors different from other foreign propaganda? What could lawmakers in Europe do to mitigate the risks of microtargeted propaganda?
It has been more than two years since I started at Radboud University. In this talk, I will introduce myself and explain a little bit about the EU project I have been involved in. I will also present our work on Threshold Multi-party Private Set Intersection, based on a use case addressed in the project. The cryptographic protocol was designed using Bloom filters. Our approach has linear complexity and significantly improves on the state of the art for large numbers of (corruptible) parties, for instance, when there are 10 or more parties who have 64 elements in their sets.
This informal lunch talk will give some insight into how cryptography is currently used in the real world and why that is problematic.
I'll briefly present "cryptographic incidents" that happened to me in my work life. I'll also show how they are exploitable and how this impacts the overall security of a product.
The first part of this talk is a gentle introduction to how AI techniques can be applied in static source code analysis to detect vulnerabilities. We will briefly look at different methods that have been tried by various researchers and give an overview of our research. In the second part of the talk we will focus on one of our research projects: detecting cross-site scripting and SQL injection vulnerabilities in PHP web applications using machine learning. In this project, we assemble a dataset of vulnerable PHP code samples and their patched versions. We extract features from these code samples by applying data-flow analysis techniques, and use these features to train various probabilistic classifiers. Our experiments show that we outperform other approaches for vulnerability detection in PHP code, and we were able to detect a previously unknown vulnerability in an open-source web application for photo sharing.
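As a toy illustration of the train-and-predict loop only, and emphatically not the feature extraction used in the project (which is based on data-flow analysis), the following sketch classifies invented PHP snippets from plain token counts:

```python
# Toy stand-in for the pipeline: token counts replace the project's
# data-flow features; the snippets and labels are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

samples = [
    'echo $_GET["name"];',                                 # XSS pattern
    'echo htmlspecialchars($_GET["name"]);',               # patched
    '$db->query("SELECT * WHERE id=" . $_GET["id"]);',     # SQLi pattern
    '$db->query("SELECT * WHERE id=?", [$_GET["id"]]);',   # parameterised
]
labels = [1, 0, 1, 0]  # 1 = vulnerable, 0 = patched

vectorizer = CountVectorizer(token_pattern=r"[\w$>\-]+")
classifier = LogisticRegression().fit(vectorizer.fit_transform(samples), labels)
print(classifier.predict(vectorizer.transform(['echo $_GET["q"];'])))
```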
The OpenSky Network is a non-profit association based in Switzerland. It aims at improving the security, reliability and efficiency of airspace usage by providing the public with open access to real-world air traffic control data. The OpenSky Network consists of a multitude of sensors connected to the Internet by volunteers, industrial supporters, and academic/governmental organizations. All collected raw data is archived in a large historical database, which is primarily used by researchers from different areas to analyze and improve air traffic control technologies and processes.
Inspired by Paul Graßl's 'Dark and Bright Patterns in Cookie Consent Requests', this lunch talk will demonstrate how the new Journal of Cross-disciplinary Research in Computational Law (CRCL) [0] 'did' its data protection statement. Mireille will discuss the relevant legal framework (GDPR and ePrivacy) to explain the crucial differences and overlaps between (1) personal data, (2) technical, functional and other cookies, and (3) other tracking mechanisms, mapping them to the relevant legal constraints, followed by a brief overview of the data protection statement (which is mistakenly qualified as a privacy statement due to the affordances of the OJS platform). Paulus Meessen and Laurence Diver (COHUBICOL researchers in Nijmegen and Brussels) will explain what this means for the backend system. [0]: https://journalcrcl.org/
Lattice reduction is the task of finding a basis of short and somewhat orthogonal vectors of a given lattice. In 1985, Lenstra, Lenstra and Lovasz proposed a polynomial-time algorithm for this task, with an application to factoring rational polynomials. Since then, the LLL algorithm has found countless applications in algorithmic number theory and in cryptanalysis. There are many analogies to be drawn between Euclidean lattices and linear codes over finite fields. In this work, we propose to extend the range of these analogies by considering the task of reducing the basis of a binary code. In fact, all it takes is to choose the adequate notion mimicking Euclidean orthogonality (namely orthopodality), after which all the required notions, arguments, and algorithms unfold before us, in quasi-perfect analogy with lattices.
Title: Dark and Bright Patterns in Cookie Consent Requests
Description: Dark patterns are (evil) design nudges that steer people's behaviour through persuasive interface design. Increasingly found in cookie consent requests, they possibly undermine principles of EU privacy law. In two preregistered online experiments we investigated the effects of three common design nudges (default, aesthetic manipulation, obstruction) on users' consent decisions and their perception of control over their personal data in these situations. In the first experiment (*N* = 228) we explored the effects of design nudges towards the privacy-unfriendly option (dark patterns). The experiment revealed that most participants agreed to all consent requests regardless of dark design nudges. Unexpectedly, despite generally low levels of perceived control, obstructing the privacy-friendly option led to more rather than less perceived control. In the second experiment (*N* = 255) we reversed the direction of the design nudges towards the privacy-friendly option, which we term "bright patterns". This time the obstruction and default nudges swayed people effectively towards the privacy-friendly option, while the result regarding perceived control stayed the same compared to Experiment 1. Overall, our findings suggest that many current implementations of cookie consent requests do not enable meaningful choices by internet users, and are thus not in line with the intention of the EU policymakers. We also explore how policymakers could address the problem.
Mathematics plays an important role in cryptography and in computer science in general. It is a crucial vehicle for logical reasoning, for insight and for problem solving. Our university programs aim to use and further develop students' mathematical skills in this direction. However, students often arrive at university with a simplistic image of mathematics that is completely misaligned with actual mathematics: "memorise and apply quickly". Not only their view but also their skills inhibit them from pursuing a truly successful computer science study. How can we bridge the gap between actual mathematics and this narrow, computation-centred idea of mathematics? Open Maths is about just that. In this talk, I will give an overview of the Open Maths courses, of my experiences with the Comenius Teaching Fellowship, and of further possibilities.
In the last half year, we have implemented the Dilithium (round 2) scheme for the ARM Cortex-M4 and ARM Cortex-M3. Particularly for the Cortex-M3 this is exciting, because the M3's 32-bit multiply operations are not constant-time. We have looked at different techniques for optimizing both performance and stack space. The result is an implementation for the M4 that is around 20% faster than previous work and, for both platforms, a memory usage of less than {40,54,71} kB for Dilithium{2,3,4} for signing, and less than 11 kB for verification.
Along with the rapid development of information and communication technologies, cyber threats have not only grown significantly in number but have also become considerably more sophisticated. One of the mechanisms that can bring a high level of security and mitigate the vulnerabilities present in computer networks is the Intrusion Detection System (IDS). In order to achieve optimal results, the decision-making processes used in IDSs to identify threats need to be smarter, as malicious actors constantly evolve their attacks using sophisticated techniques. In this talk, I will present my recent research on the intersection of AI and IDSs, and in particular how AI can help thwart or mitigate network-based attacks.
In my thesis, I looked at why certain algebraic structures (Campana curves and semi-integral points) are a useful tool for open problems in arithmetic geometry. I translated a hypothesis over number fields to a hypothesis over function fields. In this talk, I'll explain the basics of arithmetic geometry, how it gives us insight into number theory and algebraic geometry, and where such insight is relevant for cryptography.
It's possible to make TLS 1.3 post-quantum secure by simply plugging in a post-quantum key exchange and a post-quantum signature scheme. But PQ signatures tend to be quite big and slow. KEMTLS is our proposal for a post-quantum secure variant of TLS that authenticates using KEMs instead of the handshake signature, with a trick that preserves the client's ability to send its request right after the server sends its certificate: using KEMs instead of signatures does not cost extra round trips for this first message.
We compare a few instantiations of KEMTLS. Optimised for communication size, KEMTLS, with SIKE for KEX and handshake authentication, GeMSS for the CA certificate and a custom XMSS for optional intermediate certificates, requires less than half the bandwidth of a post-quantum TLS 1.3 using Falcon for the handshake signature. When picking primitives for speed, KEMTLS reduces the amount of server CPU cycles by up to 90% compared to an equivalent post-quantum instantiation of TLS 1.3, as well as reducing the time before the client can send its first application data.
Voice over LTE (VoLTE) is a packet-based telephony service seamlessly integrated into the Long Term Evolution (LTE) standard and deployed by most telecommunication providers in practice. Due to this widespread use, successful attacks against VoLTE can affect a large number of users worldwide. In this work, we introduce ReVoLTE, an attack that exploits an LTE implementation flaw to recover the contents of an encrypted VoLTE call, hence enabling an adversary to eavesdrop on phone calls. ReVoLTE makes use of a predictable keystream reuse on the radio layer that allows an adversary to decrypt a recorded call with minimal resources. Through a series of preliminary as well as real-world experiments, we successfully demonstrate the feasibility of ReVoLTE and analyze various factors that critically influence our attack in commercial networks. For mitigating the ReVoLTE attack, we propose and discuss short- and long-term countermeasures deployable by providers and equipment vendors.
An Information Retrieval model estimates the relevance of information objects to an information need. Originally, this was achieved primarily by comparing the information representations of documents (usually text) with those of the requests (as typed by the end user), and, usually, a straightforward similarity measure between these representations determined the degree of relevance. In recent years, however, with the opportunities that data processing at large has to offer, this relevance estimation takes additional factors into account. First, "learning to rank" approaches adapt the matching process to include (presumably related) previous actions of this end user on the same system, as well as the information-seeking behaviour of all users of the system. Second, information representations are no longer based on a single information object, but on the corpus as a whole. Together, these trends make the retrieval process much more vulnerable to implicit biases that sneak into the retrieval process. I will discuss some of these biases, what we try to achieve when considering these in our group, and why a further understanding of biases in search is only getting more important with the current rise of interest in conversational search.
The original idea of profiling implies attacking one device with a leakage model generated from an "identical copy", but this concept cannot always be enforced. The leakage model is commonly generated with traces from an "open device", assuming that a model which works for one device should work for another copy as well. In practice, applying a leakage model to a different copy of the same device (commonly called portability) is a hard problem to deal with, as intrinsic differences in the devices or in the experimental setups used to obtain the traces cause behavioral variations which lead to an unsuccessful attack. We therefore propose a novel similarity assessment technique that allows evaluators to quantify the differences among various copies of the same device. Additionally, we support this technique with actual experiments to show that this metric is directly related to the portability issue. Finally, we derive a method that improves the performance of template attacks.
Since its selection as the winner of the SHA-3 competition, Keccak, with all its variants, has found a large number of applications. It is, for instance, a common building block in schemes submitted to NIST's post-quantum cryptography project. In many of these applications, Keccak processes ephemeral secrets. In such a setting, side-channel adversaries are limited to a single observation, meaning that differential attacks are inherently prevented. Whether such a single trace of Keccak can already be sufficient for key recovery has, however, so far been unknown.
In this paper, we change the above by presenting the first single-trace attack targeting Keccak. Our method is based on soft-analytical side-channel attacks and thus combines template matching with message passing in a graphical model of the attacked algorithm. As a straightforward model of Keccak does not yield satisfactory results, we describe several optimizations for the modeling and the message-passing algorithm. Their combination allows attaining high attack performance in terms of both success rate and computational runtime.
We evaluate our attack assuming generic software (microcontroller) targets and thus use simulations in the generic noisy Hamming-weight leakage model. Hence, we assume relatively modest profiling capabilities of the adversary. Nonetheless, the attack can reliably recover secrets in a large number of evaluated scenarios at realistic noise levels. Consequently, we demonstrate the need for countermeasures even in settings where DPA is not a threat.
We've been forced to switch all education to an online model. For lectures, many people have taken to using videoconferencing software; others have tried their hand at Brightspace Virtual Classroom. However, because Object Oriented Programming has over 400 enrolled students, with lecture participation well over 150, none of these options were available to us. So I decided to use a piece of software and a website purposely designed for 1-to-many broadcasting: Open Broadcaster Software & YouTube.
Unfortunately, this would not work for the computer practical sessions, where usually, students work in pairs and student assistants are readily available to answer questions on-the-fly. Brightspace discussion boards weren't going to cut it. But, thanks to the help of my student assistants, when everybody else was cancelling lectures and practicals on the 13th of March, we gave our first in a series of successful online practical sessions on Discord.
This talk, set up the way one of my currently "normal" lectures looks now, will dive into the details of how I use these tools, hopefully to help you see whether or not they are suited for your courses. My ulterior motive is that I have very strong opinions on the tooling we really need for effective online education, and currently I only find those needs fulfilled by software that is not provided within the university. I hope that this may convince more people of the necessity of certain aspects of the tools I use, so that we can collectively ask for them, or tools like them, to be integrated in our own infrastructure.
The talk itself will livestream on https://youtu.be/dfBkuvhvJLw. We can use the text chat next to youtube to ask questions, or to signal someone would like to ask a spoken question in the Jitsi session.
Modern ciphers are based on the iterative application of an invertible round function that consists of a number of steps.
Clearly, there is a tension between security and efficiency when choosing the number of rounds.
Making a good choice requires a good understanding of the interplay between design decisions and the applicability of certain classes of attacks.
In this talk, we look at the relation between the partitioning of the index space as derived from the S-box layer and the applicability of differential and linear cryptanalysis to the round function.
Throughout, we consider round functions with the following steps (a toy sketch of such a round follows the list):
1) An S-box layer;
2) A linear layer;
3) A round key/constant addition.
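As a minimal sketch of these three steps on a 16-bit state of four 4-bit boxes: the S-box below is PRESENT's (chosen only for concreteness) and the linear layer is invented, so none of this corresponds to the ciphers discussed in the talk.

```python
# Toy sketch of one round on a 16-bit state seen as four 4-bit boxes.
# The S-box is PRESENT's (for concreteness only); the linear layer is
# invented and belongs to none of the ciphers named in this abstract.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def rot16(x, n):
    return ((x << n) | (x >> (16 - n))) & 0xFFFF

def sbox_layer(state):
    return sum(SBOX[(state >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

def linear_layer(state):
    return state ^ rot16(state, 1) ^ rot16(state, 5)  # simple mixing step

def round_function(state, round_key):
    return linear_layer(sbox_layer(state)) ^ round_key  # steps 1, 2, 3
```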
In particular, we consider the round functions of the ciphers Rijndael (AES), Saturnin, Spongent, and Xoodoo.
First, we cover some basic definitions and present the alignment properties of the steps.
Second, we start from the familiar notion of Hamming weight, which reveals the mixing power of the linear layer.
Contrasting this with the box weight, we discuss an effect that we call bit huddling.
Next, we introduce the lambda-box histogram of the linear layer, which reveals its so-called scattering power.
Finally, we take the S-box layer into account and discuss the differential propagation and correlation properties.
Is the corona contact-tracing app a nice opportunity and showcase for Privacy-by-Design, or is it a privacy disaster that we should fight at all cost? Or is the discussion of privacy only a distraction from the more fundamental issue that the whole enterprise is just techno-solutionism and that this piece of technology is not going to solve anything?
Mireille Hildebrandt and Jaap-Henk Hoepman will argue the positions outlined in https://drive.google.com/file/d/1OQg2dxPu-x-RZzETlpV3lFa259Nrpk1J/view and http://blog.xot.nl/
We present a methodology for generating a characterization of the memory used by an assembly program, as well as a formal proof that the assembly is bounded to the generated memory regions. A formal proof of memory usage is required for compositional reasoning over assembly programs. Moreover, it can be used to prove low-level security properties, such as integrity of the return address of a function. Our verification method is based on interactive theorem proving, but provides automation by generating pre- and postconditions, invariants, control-flow, and assumptions on memory layout. As a case study, three binaries of the Xen hypervisor are disassembled. These binaries are the result of a complex build-chain compiling production code, and contain various complex and nested loops, large and compound data structures, and functions with over 100 basic blocks. The methodology has been successfully applied to 251 functions, covering 12,252 assembly instructions.
Substitution-Permutation Networks (SPNs) are a design approach that has been popular since the mid-1990s. Such networks take a block of the plaintext and the key as inputs, and apply several alternating rounds of substitution boxes (S-boxes) and permutation boxes to produce the ciphertext block. One of the new directions in the design of these round functions is to reduce the substitution (S-box) layer from a full one to a partial one, uniformly distributed over all the rounds. LowMC and Zorro are examples of this approach.
A relevant freedom in the design space is to allow for a highly nonuniform distribution of S-boxes. However, choosing rounds that are so different from each other is very rarely done, as it makes security analysis and implementation much harder. We develop the design strategy Hades and an analysis framework for it, which despite this increased complexity allows for security arguments against many classes of attacks, similar to earlier, simpler SPNs. The framework builds on the wide trail design strategy for SPNs, and additionally allows for security arguments against algebraic attacks, which are much more of a concern when algebraically simple S-boxes are used.
As a concrete application, we present a candidate cipher natively working over GF(p) for securing data transfers with distributed databases using secure multiparty computation (MPC). Compared to the currently fastest design MiMC, we observe significant improvements in online bandwidth requirements and throughput with a simultaneous reduction of preprocessing effort, while having a comparable online latency.
Differential power analysis (DPA) exploits the dependence between the power consumption of hardware devices and the data processed by the implemented cryptographic function. Using simulations, we analyzed in depth the security against DPA of a hardware implementation of the chi function used in cryptographic algorithms like Keccak and Ascon. Our aim is to quantify the amount of effort an adversary must invest to retrieve the secret, by determining the number of power traces needed. In our simulations, the power consumption is modeled as coming entirely from the registers. We extended the Keccak team's previous analysis of the chi function to retrieve more secret bits.
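For reference, chi acts on each 5-bit row of the Keccak state as in the textbook definition below (Ascon's S-box contains a chi-like core); this one-liner is the public definition of the mapping, not our simulation setup.

```python
# chi on one 5-bit row, indices modulo 5: each bit is XORed with the AND
# of its first neighbour's complement and its second neighbour.
def chi_row(a):
    return [a[i] ^ ((a[(i + 1) % 5] ^ 1) & a[(i + 2) % 5]) for i in range(5)]

assert chi_row([0, 0, 0, 0, 0]) == [0, 0, 0, 0, 0]
assert chi_row([1, 0, 0, 0, 0]) == [1, 0, 0, 1, 0]
```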
We present our discovery of a group of side-channel vulnerabilities in implementations of ECDSA in the widely used Atmel AT90SC FIPS 140-2 certified chip and five cryptographic libraries (libgcrypt, wolfSSL, MatrixSSL, SunEC/OpenJDK/Oracle JDK, Crypto++). Vulnerable implementations leak the bit-length of the scalar used in scalar multiplication via timing information. We utilize this information leak to form a lattice attack and recover the full private key after observing thousands of signing operations.
https://minerva.crocs.fi.muni.cz
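The root cause can be sketched with a toy double-and-add loop: its iteration count, and hence its running time, depends on the bit length of the scalar. The integer arithmetic below is a stand-in for curve-point operations; in the attack, the leaked bit lengths of the ECDSA nonces feed the lattice step.

```python
# Toy sketch: a textbook left-to-right double-and-add whose running time
# reveals k.bit_length(); integers stand in for elliptic-curve points.
def scalar_mult(k, p, modulus):
    acc = 0
    for i in range(k.bit_length() - 1, -1, -1):  # one iteration per bit
        acc = (2 * acc) % modulus                # "double"
        if (k >> i) & 1:
            acc = (acc + p) % modulus            # "add"
    return acc
```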
The NIST call for lightweight cryptography has attracted quite some attention. One of the candidates, the block cipher Pyjamask, was investigated over the summer by Yann Rotella, Christoph Dobraunig and the speaker. In this talk, I will give a short summary of our attack method and results. We found a theoretical key-recovery attack that reaches the full-round version of Pyjamask-96.
We present two tweakable wide block cipher modes from doubly-extendable cryptographic keyed (deck) functions and a keyed hash function: double-decker and docked-double-decker. Double-decker is a direct generalization of Farfalle-WBC of Bertoni et al. (ToSC 2017(4)) and is a four-round Feistel network; docked-double-decker is a modified variant. We prove these constructions secure using a new security model for keyed hashes, namely the blinded keyed hashing (bkh) model. We demonstrate that blinded keyed hashing is more general than the conventional notion of XOR-universality, and that it allows us to instantiate our constructions with keyed hash functions that have a very strong claim on bkh security but not necessarily on XOR-universality, such as Xoofffie (ePrint 2018/767).
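For intuition, a generic four-round Feistel network looks as below. How double-decker instantiates the four round functions with a deck function and a keyed hash, and how it splits wide blocks (which need not be balanced), follows the paper rather than this toy.

```python
# Generic (balanced) four-round Feistel sketch; the round functions are
# abstract keyed maps from bytes to equal-length bytes.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel4(left: bytes, right: bytes, rounds):
    for f in rounds:  # four keyed round functions
        left, right = right, xor(left, f(right))
    return left, right
```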
Deep neural networks have shown human-like accuracy on classification tasks such as image recognition. Along with this development also come new security risks. One of them is the implementation of triggers in neural networks: they can be used to control the outcome of a network in advance. In this talk, a method to detect such triggers is discussed. During my internship, I researched the limitations of this detection method.
We present efficient and compact hardware/software co-design implementations of the Supersingular Isogeny Key Encapsulation (SIKE) protocol on field-programmable gate arrays (FPGAs). In order to be better equipped for different post-quantum scenarios, our architectures were designed to feature high flexibility by covering all the currently available parameter sets and by supporting primes up to 1016 bits. In particular, any of the current SIKE parameters equivalent to the post-quantum security of AES-128/192/256 and SHA3-256 can be selected and run on-the-fly. This security scalability property, together with the small footprint and efficiency of our architectures, makes them ideal for embedded applications in a post-quantum world. In addition, the proposed implementations exhibit regular, constant-time execution, which provides protection against timing and simple side-channel attacks. Our results demonstrate that supersingular isogeny-based primitives such as SIDH and SIKE can indeed be deployed for embedded applications featuring competitive performance. For example, our smallest architecture based on a 128-bit MAC unit takes only 3415 slices, 21 BRAMs and 57 DSPs on a Virtex 7 690T and can perform key generation, encapsulation and decapsulation in 14.4, 24.4 and 26.0 milliseconds for SIKEp434 and in 52.3, 86.4 and 93.2 milliseconds for SIKEp751, respectively.
Bluetooth is a widely deployed platform for wireless communications between mobile devices. It uses authenticated Elliptic Curve Diffie-Hellman for its key exchange. We show that the authentication provided by the Bluetooth pairing protocols is insufficient and does not provide the promised MitM protection. We present a new Invalid Curve Attack that preserves the x-coordinate of the public keys. The attack compromises the encryption keys of all of the current Bluetooth authenticated pairing protocols, provided both paired devices are vulnerable. Specifically, it successfully compromises the encryption keys of 50% of the Bluetooth pairing attempts, while in the other 50% the pairing of the victims is terminated. Finally, we show that most of the Bluetooth market is vulnerable. To overcome this attack, the Bluetooth standard was revised, and dozens of vendors patched their implementations.
We will give the talk we gave at Real World Crypto. The talk is about deck functions, a new type of cryptographic primitive that has the ambition to push the block cipher from its throne of favourite building block.
In this talk I will present Friet-PC, a 384-bit cryptographic permutation designed to be extended using an error-correcting code. Then, I will discuss the fault-detection capabilities of the resulting permutation. Finally, I will argue that the cost of the added redundancy can be limited by carefully choosing the building blocks of the round function.
In this talk, I will present a tree-based approach to efficiently scan the space of high-probability differential trails in Keccak-$f$. This technique allows us to scan the space of trails with weight per round of 15 and thus to improve bounds for the minimum weight of differential trails in Keccak-$f$ on 3, 4, 5 and 6 rounds.
I will then show some preliminary results obtained by applying the same techniques to bound the weight of linear trails.
In this talk I will make a case for implementing cryptographic algorithms in programming languages with strong type system guarantees.
In the first half I will recapitulate the problem of side-channel attacks and different approaches to prove an implementation side-channel silent, with some tradeoffs.
Following this overview, I will introduce Haskell as an uncommon but possible cryptographic implementation language, and an approach to have a single compiler produce assembly code and prove that code to be constant-time in its handling of secret values.
Changing execution semantics and compiler optimisations pose problems, which I will discuss briefly. I will present an example of a lazy evaluation side-channel which is not as apparent in a strict context and how specific Glasgow Haskell Compiler phases can be analysed.
We present SeLoC: a relational separation logic for verifying non-interference of fine-grained concurrent programs in a compositional way. SeLoC is more expressive than previous approaches, both in terms of the features of the target programming language, and in terms of the logic. The target programming language supports dynamically allocated references (pointers), higher-order functions, and fine-grained fork-based concurrency with low-level atomic operators like compare-and-set. The logic provides an invariant mechanism to establish protocols on data that is not protected by locks. This allows us to verify programs that were beyond the reach of previous approaches.
A key technical innovation in SeLoC is a relational version of weakest preconditions to track information flow using separation logic resources. On top of these weakest preconditions we build a type system-like abstraction, using invariants and logical relations. SeLoC has been mechanized on top of the Iris framework in the Coq proof assistant.
We investigate the accuracy of worst-case heuristic bounds on the noise growth in ring-based homomorphic encryption schemes. We use the methodology of Iliashenko (PhD thesis, 2019) to provide a new heuristic noise analysis for the BGV scheme. We demonstrate that for both the BGV and FV schemes, this approach gives tighter bounds than previous heuristic bounds, by as much as 10 bits of noise budget. Then, we provide experimental data on the noise growth of HElib and SEAL ciphertexts, in order to evaluate how well the heuristic bounds model the noise growth in practice. We find that, in spite of our improvements, there is still a gap between the heuristic estimate of the noise and the observed noise in practice. We extensively justify that the heuristic worst-case approach inherently leads to this gap, and hence leads to selecting significantly larger parameters than needed. We propose tightening this gap as an open problem, and suggest a possible solution. As an additional contribution, we update the comparison between the two schemes presented by Costache and Smart (CT-RSA, 2016). Using the new analysis, we show that FV has a slight advantage over BGV, even for large plaintext modulus, in contrast to the findings of Costache and Smart.
Joint work with Anamaria Costache and Kim Laine.
Secret sharing schemes are one of the essential mechanisms for safeguarding secret information and have found many applications in modern cryptographic protocols such as distributed computing, secure multiparty computation, threshold cryptography, attribute-based encryption, access control, generalized oblivious transfer and Byzantine agreement. We propose an efficient generalized secret sharing scheme based on N-mu-MDS codes, which is ideal and perfect and has the desirable security features of cheating detection and cheater identification.
This thesis is a collection of the work I have done at Radboud University between 2015 and 2019, under the supervision of Peter Schwabe and in collaboration with a wide range of coauthors.
The work consists of three separate chapters, discussing (specific schemes and optimization targets within the realms of) hash-based signatures, MQ-based signatures and lattice-based KEMs. While 'post-quantum cryptography' is the obvious commonality, the real focus of this work is on cryptographic engineering, and most of my contributions involve software optimization.
In recent years many organisations have implemented smart card authentication or other forms of two-factor authentication as a measure to reduce the risks associated with traditional password-based authentication. However, the low-level details of underlying protocols in Windows networks, such as Kerberos and NTLM, introduce new security caveats when combined with smart cards. Although secure algorithms and protocols are used as building blocks, they can still yield insecure cryptographic systems when combined. In this talk I will demonstrate this via various practical attacks on the cryptographic systems that are used in modern Windows networks.
In this talk I will explain why the COHUBICOL ERC project on 'innovation in legal method' is hiring CS postdocs at iCIS.
We now have a number of outrageous (?) claims regarding so-called 'legal tech'. These concern (1) 'smart contracts' and 'smart regulation' (or 'persistent script', as Vitalik said he should have coined it), and (2) the use of machine learning either to mine argumentation lines from relevant legal sources (legislation, case law, doctrine) or to predict legal judgements (based on NLP, voting behaviour of judges).
The COHUBICOL project aims to (1) investigate the CS assumptions on which such claims are based, (2) examine the extent to which such claims can be verified AND falsified, and (3) develop an assessment framework that enables lawyers to test and contest such claims. The role of the computer scientists will be to contribute to a proper understanding on the side of the legal team of what computing systems can and cannot deliver, such that the legal team learns to ask the right types of questions and is able to sufficiently understand the answers.
Disclaimer: if you don't like command line interfaces, this talk is not for you.
In this talk I will review more advanced features of Git, such as branching, merging, rebasing and resetting. I will also motivate some good practices.
Attribute-based encryption (ABE) is a type of public-key cryptography that associates key-pairs with attributes rather than identities. For this reason, many users can possess secret keys for one or more of the same attributes. Moreover, users might want to collude by pooling their secret keys together such that they might be able to decrypt ciphertexts that they cannot decrypt individually. This is not allowed in secure ABE. Because both the ciphertexts and the secret keys require strong security guarantees, designing provably secure ABE is difficult. As a result, several ABE schemes are broken despite having security proofs. In this talk, I will show how to make and break ABE, and how (crypt)analysis of an ABE scheme can be simplified by reducing schemes to an efficient and structured notation.
Recent years have shown that machine learning can be a powerful paradigm for implementation attacks, especially profiling side-channel attacks (SCAs). Still, despite all the success, we are limited in our understanding of when and how to select appropriate machine learning techniques. Additionally, the results we can obtain are empirical and valid for specific cases where generalization is often difficult. In this talk, we discuss several well-known machine learning techniques, the results obtained, and their limitations. Next, we concentrate on deep learning techniques and the potential benefits such techniques can bring to SCA, with an emphasis on real-world scenarios. In the last part of the talk, we discuss how various AI techniques can be used for fault injection attacks.
In this talk, I will discuss a rather "controversial" subject: one of Kyber's new parameters for the second round of the NIST PQC process.
More specifically, I will talk about how to perform modular reduction for Kyber's new prime, namely 3329.
I will compare the method from the original documentation with Montgomery reduction and with pseudo-Mersenne reduction.
I will show some results in hardware (FPGA and ASIC) for the two strategies, and even show that writing the same thing a little differently can lead to interesting results.
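As a taste of the Montgomery option, here is a Python sketch of signed Montgomery reduction for q = 3329 with R = 2^16, written in the style of the Kyber reference software; the hardware trade-offs the talk is about are of course not captured by this model.

```python
# Sketch of signed Montgomery reduction for Kyber's round-2 prime q = 3329
# with R = 2^16, in the style of the reference implementation.
Q = 3329
QINV = pow(Q, -1, 1 << 16)  # 62209: q^{-1} modulo 2^16

def montgomery_reduce(a):
    """For |a| < 2^15 * q, return t with t = a * 2^(-16) mod q and |t| < q."""
    t = (a * QINV) & 0xFFFF       # low 16 bits of a * q^{-1}
    if t >= 1 << 15:              # reinterpret as a signed 16-bit value
        t -= 1 << 16
    return (a - t * Q) >> 16      # exact: the low 16 bits cancel

# Pre-multiplying one factor by 2^16 mod q makes the reduction compute a*b mod q:
a, b = 1234, 2345
assert montgomery_reduce((a * pow(2, 16, Q) % Q) * b) % Q == a * b % Q
```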
In this talk, I'm going to present a memory-efficient high-speed implementation of Kyber for the ARM Cortex-M4 that was presented at Africacrypt 2019.
Furthermore, I am going to present recent results of the Cortex-M4 benchmarking project pqm4 which aims to cover all the NIST PQC candidates. pqm4 was presented last month at the Second NIST PQC Standardization Conference in Santa Barbara.
Deloitte's cyber security team consists of around 230 professionals, with people specializing in payment security, malware analysis, red teaming and much more. We will discuss the great variety of backgrounds that are present in our team and talk about many interesting projects that our colleagues work on.
Rank metric is a very promising research direction for code-based cryptography. In fact, thanks to the high complexity of generic decoding attacks against codes in this metric, it is possible to easily select parameters that yield very small data sizes. In this talk I will present the results from a recent paper where we analyze cryptosystems based on Low-Rank Parity-Check (LRPC) codes, one of the classes of codes that are efficiently decodable in the rank metric. We show how to exploit the decoding failure rate, which is an inherent feature of these codes, to devise a reaction attack aimed at recovering the private key. As a case study, we cryptanalyze the 1st round McNie submission to NIST’s Post-Quantum Standardization process. In the paper, we provide details of a simple implementation to validate our approach.
Research in symmetric cryptography in the last few years has mainly been driven by dedicated high-profile open competitions such as NIST's AES and SHA-3 selection procedures, or the ongoing NIST LWC initiative. While these focused competitions in symmetric cryptography are generally viewed as having provided a tremendous increase in the understanding of and confidence in the security of these cryptographic primitives, the large number of submissions to such competitions reveals major problems related to the analytical effort required of the cryptographic community. Therefore, automatic tools are needed to assist cryptanalysts in their work. In this talk, we will present an automatic tool/framework for differential cryptanalysis. Although the tool was originally designed for the analysis of hash functions, we think that it might also be of interest for the analysis of other cryptographic primitives, for instance the NIST LWC candidates.
OTRv4 is the newest version of the Off-The-Record protocol. As a Privacy Enhancing Technology, it is a protocol where the newest academic research intertwines with real-world implementations. This new version asks us to revisit definitions around deniability (participation, message, online and offline), forward and post-compromise secrecy, and how important they are to the world. In this talk, we will examine the key security properties that secure messaging protocols and applications should have. We will explore how these properties are achieved in OTRv4, in particular, and why they are needed in today’s world.
Modern blockchains, such as Ethereum, enable the execution of so-called smart contracts – programs that are executed across a decentralised network of nodes. As smart contracts become more popular and carry more value, they become more of an interesting target for attackers. In the past few years, several smart contracts have been found to be vulnerable and thus exploited by attackers. However, a new trend towards a more proactive approach seems to be on the rise, where attackers do not search for vulnerable contracts anymore. Instead, they try to lure their victims into traps by deploying vulnerable-looking contracts that contain hidden traps. This new type of contracts is commonly referred to as honeypots. In this paper, we present the first systematic analysis of honeypot smart contracts, by investigating their prevalence, behaviour and impact on the Ethereum blockchain. We develop a taxonomy of honeypot techniques and use this to build HONEYBADGER – a tool that employs symbolic execution and well-defined heuristics to expose honeypots. We perform a large-scale analysis on more than 2 million smart contracts and show that our tool not only achieves high precision, but also high scalability. We identify 690 honeypot smart contracts as well as 240 victims in the wild, with an accumulated profit of more than $90,000 for the honeypot creators. Our manual validation shows that 87% of the reported contracts are indeed honeypots.
Qbit Cyber Security, originating from ITsec, the oldest ICT security organisation in the Netherlands, is specialised in performing security assessments. With the ICT landscape changing to an environment where everything is "connected", Qbit is developing an IoT testing lab environment for the purpose of testing IoT devices. Part of the IoT lab is a cooperation with organisations such as the Consumentenbond and several universities of applied sciences. During the presentation we will explain who we are, what we do and how we can work together, and we will show anonymised cases and examples as seen in practice.
In this talk the speaker will introduce how cryptography is managed within ABN AMRO bank. The speaker will also address current crypto-related challenges in the financial sector.
NLX is an open source inter-organisational system facilitating federated authentication, secure connecting and protocolling in a large-scale, dynamic API landscape. Governmental organizations in the Netherlands find themselves in the unique position where many API providers and consumers from many organizations have to interact in n-m relationships. Current mechanisms to use APIs from external organizations come with cumbersome bureaucratic processes. With NLX we aim to eliminate man-in-the-middle constructions, decreasing administrative overhead. We aim to optimize for speed and a high level of security.
By using the NLX network, all API calls are logged in a way that enables the data provider to become GDPR compliant, e.g. by giving insight into data usage.
As this is a non-commercial, Open Source initiative started by local governments, we’d love to confront the work in progress with your crowd and see if it survives.
This doctoral thesis is a mathematical study of quantum computing, concentrating on two related, but independent topics. First up are dilations, covered in chapter 2. In chapter 3 "diamond, and then, dagger" we turn to the second topic: effectus theory. Both chapters, or rather parts, can be read separately and feature a comprehensive introduction of their own.
In this dissertation we study the category of completely positive normal contractive maps between von Neumann algebras. It includes an extensive introduction to the basic theory of $C^∗$-algebras and von Neumann algebras.
In this talk, I will introduce what string diagrams are and how to do arithmetic (just natural numbers) with them. The talk assumes no deep understanding of mathematics, but you need to know how to count. If time permits, I may say a little on how string diagrams are used for quantum circuits.
Side-channel attacks target the implementation of crypto algorithms. As the target of a side-channel attack is the implementation, evaluations can only be done once the product is finished. As a developer, however, you want to know if your implementation is vulnerable early in the development cycle, not just before your product is ready to launch. The first step in a typical SCA evaluation is the acquisition of power traces using an oscilloscope. An SCA simulator creates traces similar to the ones collected with an oscilloscope and, in theory, could be used during development. In this talk, I will go over the different classes of simulators and their advantages and disadvantages, and will highlight the challenges of fairly comparing them.
When fingerprinting is discussed, a lot of different terminology is used. Studies use different terms, are not always clear about the studied features, and propose methods which lack a fundamental approach. There is an understanding of the concept of fingerprinting, but a fundamental way to reason about it is lacking. We talk about a fingerprint surface and propose a taxonomy to use when discussing this topic. We applied our taxonomy to important studies over multiple years and reason about countermeasures. We argue that determining the fingerprint surface is a fundamentally hard problem, as it is hard to be complete in every category. We identified the browser property category as one that can be complete, and introduce Prop-test to return the entire fingerprint surface for this category.
In this talk I will discuss cryptanalytic applications of Gröbner basis algorithms and some possible directions for improvement.
Filter generators are vulnerable to several attacks which have led to well-known design criteria on the Boolean filtering function (mainly algebraic immunity and nonlinearity). However, Rønjom and Cid have observed that a change of the primitive root defining the LFSR leads to several equivalent generators. They usually offer different security levels since they involve filtering functions of the form $F(x^k)$ where $k$ is coprime to $2^n-1$ and $n$ denotes the LFSR length.
We prove that this monomial equivalence does not affect the resistance of the generator against algebraic attacks, but usually impacts the resistance to correlation attacks.
Most importantly, a more efficient attack can often be mounted by considering non-bijective monomial mappings. In this setting, a divide-and-conquer strategy applies based on a search within a multiplicative subgroup of $F_{2^n}^*$. Moreover, if the LFSR length $n$ is not prime, a fast correlation attack involving a shorter LFSR can be performed.
This attack is generic and uses the decomposition into the multiplicative subgroups of $F_{2^n}^*$, leading to new design criteria for Boolean functions used in cryptography.
Encrypted devices have become part of our day-to-day lives, and it is very important to protect such devices against malicious users. Physical attacks are a common threat to the security of embedded devices. A malicious user can decipher secret information by measuring the time that an algorithm takes to execute, the power consumption, the electromagnetic emanations, or other physical quantities that could form a meaningful physical leak.
Louiza Papachristodoulou’s PhD research centred on the security of cryptographic algorithms with public keys, and specifically on Elliptic Curve Cryptography (ECC). ECC is generally used for implementing asymmetric cryptographic protocols in integrated systems. In her thesis, Papachristodoulou presents a powerful, new attack method called Online Template Attacks which has broad applicability in various ECC implementations. Further to this, she presents and evaluates algorithmic and numerical countermeasures that provide new insights for safe implementations.
I will present some highlights from a report that I recently wrote for the Council of Europe about 'Discrimination, artificial intelligence, and algorithmic decision-making'.
AI-driven decisions can have far-reaching effects. The police can use AI to predict who will commit crimes. Banks can use AI to decide whether to give somebody a mortgage. AI decisions can seem rational and unbiased. But, unfortunately, AI can have discriminatory effects, for instance when AI systems learn from biased human decisions. The report examines which legal protection exists in Europe against AI-driven discrimination, and how that protection could be improved.
In this talk, we will show how to apply the techniques for trail bounds published by Mella, Daemen and Van Assche to the Noekeon block cipher.
This method consists of identifying independent regions of the state of the cipher with regard to its round function, allowing for efficient pruning of the search space.
Using this method, we are able to reproduce past results for the original 128-bit variant of Noekeon and generate new differential trail bounds for a 64-bit variant.
Passive implementation attacks such as Differential Power Analysis (DPA) are powerful tools for extracting secrets from cryptographic implementations that only require the attacker to observe a device's power consumption during normal operation. In this talk I will revisit this topic, both from the defender's and the attacker's point of view.
First, I will present ISAP v2.0, a lightweight and DPA secure authenticated encryption mode that was submitted by colleagues and myself to the NIST competition for lightweight cryptography.
In the second part I present work on single-trace attacks, i.e., attacks that can recover secrets even if the attacker is restricted to observing only one power trace of a cryptographic computation. As a concrete example I will show how such an attack can be used to extract a private key from a masked RLWE-based encryption scheme.
In order to execute a (logical) quantum circuit on a physical quantum computer, the original circuit needs to be adjusted such that the gates used are present in the architecture of the quantum computer. In this talk, an approach to routing circuits consisting of only CNOT gates (parity maps) is presented. In short, the circuit is first condensed to its matrix representation, and an equivalent circuit is extracted from the matrix such that it obeys the given architecture. This is done with a constrained Gaussian elimination that uses Steiner trees and a given Hamiltonian path over the architecture. This method can be optimized with a genetic algorithm by searching for an optimal mapping from logical qubits to their physical location in the architecture. The resulting circuits use fewer CNOT gates than the compiled circuits obtained from the Quil compiler.
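The "condense to a matrix" step can be sketched quickly: over GF(2), a CNOT with control c and target t adds row c to row t, so a whole CNOT circuit composes into one invertible GF(2) matrix. The routing and Steiner-tree extraction are not shown, and the example circuits below are invented.

```python
import numpy as np

def parity_map(n_qubits, cnots):
    """Condense a CNOT-only circuit (list of (control, target) pairs,
    applied left to right) into its GF(2) matrix representation."""
    m = np.eye(n_qubits, dtype=np.uint8)
    for control, target in cnots:
        m[target] ^= m[control]  # CNOT: x_target <- x_target XOR x_control
    return m

# Two different circuits can realise the same parity map:
assert (parity_map(3, [(0, 1), (1, 2), (0, 1)]) ==
        parity_map(3, [(0, 2), (1, 2)])).all()
```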
Cogent is a high-level functional language that reduces the cost of writing and formally verifying efficient systems code. In this talk, I'll give a brief introduction of the language, and talk about our current work, on supporting data layout descriptions which enable customising memory layouts of Cogent algebraic datatypes. This extension allows programmers to write more code in Cogent and to integrate Cogent code with existing C programs (e.g. the Linux kernel) more seamlessly.
In 2016, NIST put out a call for proposals for digital signature schemes and key exchange methods that can hold up against an adversary that has a quantum computer. In November 2017, a large collection of such schemes was submitted for consideration, based on a wide variety of new and existing mathematical structures. For this work, we look at the key exchange schemes that make use of multiplication over $Z_{2^m}[x]$, and optimize this operation on the Cortex-M4. In this talk, I will describe how we optimize small polynomial multiplications on the M4, review larger multiplication using Karatsuba's and Toom-Cook's methods, and discuss our design-space exploration towards finding the optimal multiplication methods for the various schemes.
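As a reminder of the recursion at the heart of Karatsuba's method, here is a plain-Python version for polynomials over $Z_{2^m}[x]$, with coefficients given as lists of equal, power-of-two length; the M4-specific register scheduling that the talk is really about is of course absent.

```python
# Karatsuba for polynomials over Z_{2^m}[x]: three half-size products
# instead of four. f and g are coefficient lists of equal power-of-two length.
def poly_mul_karatsuba(f, g, m, threshold=4):
    n, mask = len(f), (1 << m) - 1
    if n <= threshold:  # schoolbook base case
        res = [0] * (2 * n - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                res[i + j] = (res[i + j] + fi * gj) & mask
        return res
    h = n // 2
    f0, f1, g0, g1 = f[:h], f[h:], g[:h], g[h:]
    low = poly_mul_karatsuba(f0, g0, m, threshold)
    high = poly_mul_karatsuba(f1, g1, m, threshold)
    fs = [(a + b) & mask for a, b in zip(f0, f1)]
    gs = [(a + b) & mask for a, b in zip(g0, g1)]
    mid = poly_mul_karatsuba(fs, gs, m, threshold)
    res = [0] * (2 * n - 1)
    for i, c in enumerate(low):
        res[i] = (res[i] + c) & mask
    for i, c in enumerate(high):
        res[i + 2 * h] = (res[i + 2 * h] + c) & mask
    for i in range(len(mid)):  # middle term: mid - low - high, shifted by h
        res[i + h] = (res[i + h] + mid[i] - low[i] - high[i]) & mask
    return res
```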
A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies’ market value has led to the development of cryptocurrency mining (cryptomining) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user’s browser to mine cryptocurrencies without their consent.
In this talk, I will elaborate on the comprehensive analysis we performed on Alexa’s Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns.
Furthermore, motivated by our findings, we investigate possible countermeasures against this type of attack. I will discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation.
Side-channel attacks (SCA) are perpetrated by a third party during a communication and take advantage of information leaks, from a device or an algorithm, to infer secret data. One powerful type of SCA is the template attack (TA), which, like other profiling attacks, relies on the following principle: evaluate a trend from known data leaks to create models characterizing it, and apply these models to an unknown leak to identify it.
While this method was designed for SCA purposes, it is in essence a probabilistic approach to a classification problem. Such problems are frequently solved in data science with different methods (e.g. SVM, RF), which show good results on many applications similar to SCA. I will present some approaches to solving SCA with these data science methods.
The community that studies functions over $GF(2)$ and $GF(2^n)$ does not stop at the binary case, but also studies functions over $GF(P)$. The work I do here during my research internship focuses on linear cryptanalysis over $GF(P)$, trying to capitalize on the large amount of theoretical results over $GF(P)$.
I will briefly explain how I implemented various bit-sliced operations over $GF(3)$, $GF(5)$ and $GF(7)$ and how I modified the Walsh-Hadamard transform to work over $GF(P)$.
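For readers unfamiliar with it, the binary fast Walsh-Hadamard transform that serves as the starting point looks as follows; the $GF(P)$ generalisation developed in the internship is not reproduced here.

```python
# In-place fast Walsh-Hadamard transform over the integers. Applied to the
# (-1)^f(x) table of a Boolean function f, it yields the correlations of f
# with all linear masks. The length of t must be a power of two.
def fwht(t):
    h = 1
    while h < len(t):
        for i in range(0, len(t), 2 * h):
            for j in range(i, i + h):
                t[j], t[j + h] = t[j] + t[j + h], t[j] - t[j + h]
        h *= 2
    return t

assert fwht([1, 1, 1, -1]) == [2, 2, 2, -2]  # sign table of the 2-bit AND
```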
Then, the remainder of the talk will be about using differential cryptanalysis to find collisions in Message Authentication Codes.
In order to find collisions one needs to find a trail with a high differential probability. I will show how evolutionary algorithms can be applied to find such a trail.
I will introduce a semantic framework for modeling speculative/out-of-order execution, and show how this framework can be used to reason about several variants of Spectre.
I will also propose a method of using symbolic analysis based on this model to check cryptographic code for potential Spectre vulnerabilities.
In this Digital Security lunch talk I will talk neither about security nor about digital systems. Instead I will give a computer scientist's introduction to quantum computing. I will discuss why you should care about quantum computers (spoiler: it is not because they break RSA), how you do actual computations, and what the challenges are in getting a practical-scale quantum computer to work. This discussion will then converge to the topic that Aleks and I have been working on, namely optimizing quantum circuits. Using a completely different method than the competition, we are able to make circuits more efficient than the current state of the art.
Continuation of Carlo's previous talk with a demo on SSDs.
As an evolution of traditional industrial control systems, cyber-physical industrial systems combine feedback control theory with computing and communication capabilities. Attacks against these critical systems must be handled in terms of both safety and security. To this end, we propose a challenge-response detector, based on control-theoretic watermarks, that handles integrity attacks. The detector is robust against adversaries that are able to acquire knowledge of the system's parameters, through the network, in order to compromise its physical behavior.
We have analyzed the hardware full-disk encryption implementation of several Self-Encrypting Drives (SEDs) from Samsung and Crucial (Micron) by reverse engineering their firmwares. The vendors combined cover a majority of the market share of SEDs sold today.
In theory, the security guarantees offered by hardware encryption are similar to those of software implementations. In reality, we found that many hardware implementations have critical security weaknesses, for many models allowing for complete recovery of the data without knowledge of any secret.
BitLocker, the encryption software built into Microsoft Windows, will rely exclusively on hardware full-disk encryption if the drive advertises support for it. Thus, for these drives, data protected by BitLocker is also compromised.
This challenges the view that full-disk encryption implemented in hardware is preferable over software. We conclude that one should not rely solely on hardware encryption offered by SEDs.
In this paper, we explore traffic analysis attacks on Tor that are conducted solely with middle relays rather than with relays from the entry or exit positions. We create a methodology to apply novel Tor circuit and website fingerprinting from middle relays to detect onion service usage; that is, we are able to identify websites with hidden network addresses by their traffic patterns. We also carry out the first privacy-preserving popularity measurement of a single social networking website hosted as an onion service by deploying our novel circuit and website fingerprinting techniques in the wild. Our results show: (i) that the middle position enables wide-scale monitoring and measurement not possible from a comparable resource deployment in other relay positions, (ii) that traffic fingerprinting techniques are as effective from the middle relay position as prior works show from a guard relay, and (iii) that an adversary can use our fingerprinting methodology to discover the popularity of onion services, or as a filter to target specific nodes in the network, such as particular guard relays.
More and more parts of the internet are hidden behind a login field. This poses a barrier to any study predicated on scanning the internet. Moreover, the authentication process itself may be a weak point. To study authentication weaknesses at scale, automated login capabilities are needed. In this work we introduce Shepherd, a scanning framework to automatically log in on websites. The Shepherd framework enables us to perform large-scale scans of post-login aspects of websites.
Shepherd scans a website for login fields, attempts to submit credentials and evaluates whether login was successful. We illustrate Shepherd's capabilities by means of a scan for session hijacking susceptibility. In this study, we use a set of unverified website credentials, some of which will be invalid. Using this set, Shepherd is able to fully automatically log in and verify that it is indeed logged in on 6,273 unknown sites, or 12.4% of the test set. We found that from our (biased) test set, 2,579 sites, i.e., 41.4%, are vulnerable to simple session hijacking attacks.
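A minimal sketch of the first step, detecting a login field, might look as follows; this illustrates the idea only and is not code from the Shepherd framework (the credential submission, retries, and success evaluation it needs are omitted).

```python
# Hedged sketch: fetch a page and look for a form containing a password
# field, the usual signature of a login form.
import requests
from bs4 import BeautifulSoup

def find_login_form(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for form in soup.find_all("form"):
        if form.find("input", {"type": "password"}):
            return form  # candidate login form to submit credentials to
    return None  # no login field found on this page
```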
This talk presents the first practical collision attack on 5-round SHA3-256. This is obtained by constructing a 2-round connector which links the initial state with a 3-round differential trail. The collision of 5-round SHA3-256 is possible due to an improved algorithm for constructing connectors and a dedicated algorithm for searching differential trails.
LangSec provides good insights in the root causes of security vulnerabilities (which are typically in input handling) and in software engineering approaches to tackle these root causes.
LangSec efforts focus on eradicating parsing bugs, but there is a second class of bugs where better parsing does not help: the class of forwarding bugs, which are due to unwanted rather than buggy parsing. We'll discuss this class of bugs, its root causes, and ways to tackle these (notably using types).
With the possible rise of large quantum computers, the standardization procedure by NIST for post-quantum protocols has begun. A protocol I am a contributor to is SIKE, which is unique in being based on the problem of finding isogenies between supersingular elliptic curves. In this talk I will give a brief introduction to the protocol, and then describe the latest developments with respect to its classical security. This has been the subject of a 3-month summer internship at Microsoft Research.
It is a well-known platitude that correlation does not imply causation. Indeed the decline of piracy seems to have little causal effect on climate change, in spite of a strong correlation. However, in seminal work due to Pearl, Spirtes, Lauritzen, and others in the early 90s, it was shown (perhaps surprisingly) that under certain circumstances, it is possible to draw causal conclusions purely from observed statistical correlations. Since then, the field of statistical causal inference has given us a big bag of techniques for discovering and measuring causal effects, even in the presence of hidden confounders and selection bias. Indeed, quite a few of our friends downstairs in DaS are working on just that.
In this talk, I will give a high-level perspective on causal discovery and inference problems, using the language of process theories. A process theory gives us the bare minimum structure we need to talk about "black boxes" and the fundamental operation of "plugging boxes together" into diagrams of boxes and wires. From there, we can introduce a couple of extra ingredients which enable us to do calculations directly on the diagrams. After introducing some ingredients which are especially helpful for causal inference, I will work through a simple, classic example. Namely, I will show that smoking causes cancer.
In the field of the Internet of Things, self-powered devices provide a sustainable alternative to battery-powered devices, but they suffer from frequent power loss due to lack of input power. Power loss is handled differently depending on the computing model. Conventional computing systems, which do not consider power loss in their computing model, respond to a power loss by losing all volatile state and restarting the device upon the next power-up. Intermittent computing systems, by contrast, are designed to handle frequent power losses: they prevent restarting by storing a snapshot of the device state as a checkpoint in non-volatile memory. Upon the next power-up, the device is restored with the most recent checkpoint and resumes application execution. All security-sensitive data from both the system and application state is stored in a checkpoint. We show that an attacker with access to the non-volatile memory can identify the checkpoints and the vulnerable information stored in them, and can view, tamper with and replay checkpoints without alerting the device.
To overcome these vulnerabilities, we designed the Secure Intermittent Computing Protocol, which provides the following security features to the checkpoints of an intermittent system. First, it provides basic information security to checkpoints. Second, it introduces uniqueness to the checkpoints to detect checkpoint replay. Third, the checkpoints are chained to preserve their order. As the protocol is designed for intermittent systems, it is atomic in design, protecting the checkpoint generation and restoration process from power loss.
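For intuition, here is a minimal Python sketch of checkpoint protection along these lines; the key storage, field layout and function names are illustrative assumptions, not the actual protocol.

```python
import hmac, hashlib, os

KEY = os.urandom(16)   # device key; assumed to live in protected storage

def make_checkpoint(state: bytes, counter: int, prev_tag: bytes) -> dict:
    # Bind the snapshot to a monotonic counter (freshness/uniqueness) and to
    # the previous checkpoint's tag (chaining), then authenticate everything.
    body = counter.to_bytes(4, "big") + prev_tag + state
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return {"counter": counter, "prev": prev_tag, "state": state, "tag": tag}

def restore_checkpoint(cp: dict, expected_counter: int, expected_prev: bytes) -> bytes:
    body = cp["counter"].to_bytes(4, "big") + cp["prev"] + cp["state"]
    if not hmac.compare_digest(hmac.new(KEY, body, hashlib.sha256).digest(), cp["tag"]):
        raise ValueError("checkpoint tampered with")
    if cp["counter"] != expected_counter or cp["prev"] != expected_prev:
        raise ValueError("checkpoint replayed or out of order")
    return cp["state"]
```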
The network paradigm Opportunistic Networking (OppNet) is extremely useful when source and destination nodes are only intermittently connected. Examples include scenarios without network infrastructure, such as emergency scenarios, disconnected ad-hoc networks or, more recently, device-to-device vehicular communications.
In OppNet, messages are created and forwarded from one node to another whenever an encounter occurs. Otherwise, a message has to wait patiently until it can be forwarded and is finally delivered (or not) to its final destination.
In this talk, I will present several published studies where cryptographic tools such as homomorphic encryption, identity-based cryptography and fair exchange signature schemes are used to improve forwarding and delivery decisions in OppNet.
This talk will be about statistical fault attacks (SFA) and statistical ineffective fault attacks (SIFA). I will cover the basic working principle and idea of these attacks and show possible applications. In particular, we will see how to attack implementations protected with both masking and detection-based fault countermeasures, using a single fault induction per execution.
This talk will be an attempt to summarize the basics of differential cryptanalysis and explain how to search for a cluster of differential trails.
We will also see an application to Xoodoo, a new permutation whose design is inspired by Keccak.
Fingerprinting of browsers has been thoroughly investigated. In contrast, mobile phone applications offer a far wider array of attributes for profiling, yet fingerprinting practices on this platform have hardly received attention.
In this paper, we present the first (to our knowledge) investigation of Android libraries by commercial fingerprinters. Interestingly enough, there is a marked difference with fingerprinting desktop browsers: we did not find evidence of typical fingerprinting techniques such as canvas fingerprinting. Secondly, we searched for behaviour resembling that of commercial fingerprinters and performed a detailed analysis of six similar libraries. Thirdly, we investigated ∼30,000 apps and found that roughly 19% of these apps use one of these libraries. Finally, we checked how often these libraries were used by apps subject to the Children’s Online Privacy Protection Act (i.e. apps targeted explicitly at children), and found that these libraries were included 21 times.
Joint work with Jean Paul Degabriele (presented at RWC'18 and published at Eurocrypt'18).
Tor is a primary tool for maintaining anonymity online. It provides a low-latency, circuit-based, bidirectional secure channel between two parties through a network of onion routers, with the aim of obscuring exactly who is talking to whom, even to adversaries controlling part of the network. Tor relies heavily on cryptographic techniques, yet its onion encryption scheme is susceptible to tagging attacks (Fu and Ling, 2009), which allow an active adversary controlling the first and last node of a circuit to deanonymize with near-certainty. This contrasts with less active traffic correlation attacks, where the same adversary can at best deanonymize with high probability. The Tor project has been actively looking to defend against tagging attacks and its most concrete alternative is proposal 261, which specifies a new onion encryption scheme based on a variable-input-length tweakable cipher.
We provide a formal treatment of low-latency, circuit-based onion encryption, relaxed to the unidirectional setting, by expanding existing secure channel notions to the new setting and introducing circuit hiding to capture the anonymity aspect of Tor. We demonstrate that circuit hiding prevents tagging attacks and show that proposal 261’s relay protocol is circuit hiding and thus resistant against tagging attacks.
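To see why malleability matters, consider a toy model of onion layers as XOR keystreams (purely illustrative; Tor's actual relay crypto uses AES in counter mode): a pattern injected at the entry survives unchanged to the exit.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

cell = b"request to example.org!!"
k1, k2, k3 = (os.urandom(len(cell)) for _ in range(3))  # per-hop keystreams

# the client wraps the cell in three layers: exit (k3) innermost, entry (k1) outermost
onion = xor(xor(xor(cell, k3), k2), k1)

tag = bytes([0x80] + [0x00] * (len(cell) - 1))
tagged = xor(onion, tag)                      # malicious entry relay flips one bit

peeled = xor(xor(xor(tagged, k1), k2), k3)    # each relay peels its own layer
assert xor(peeled, cell) == tag               # a colluding exit recognises the tag
```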
CSIDH (Commutative Supersingular Isogeny Diffie--Hellman) is a post-quantum cryptosystem which, as its name suggests, is intended as a drop-in replacement for classic Diffie--Hellman in practice. In theory, it is a concrete instantiation of Couveignes' "Hard Homogeneous Spaces" key exchange. While HHS key exchange has a clear algebraic resemblance to Diffie--Hellman on the surface, the deeper you go, the weirder it gets.
In this talk we will explore some of the crucial differences between DH, ECDH, and CSIDH, while avoiding too much depth and weirdness.
MORUS is a high-performance authenticated encryption algorithm submitted to the CAESAR competition, and recently selected as a finalist.
There are three versions of MORUS: MORUS-640 with a 128-bit key, and MORUS-1280 with 128-bit or 256-bit keys.
For all versions the security claim for confidentiality matches the key size.
We present a linear correlation in the keystream of full MORUS, which can be used to distinguish its output from random and to recover some plaintext bits in the broadcast setting.
We present TLBleed, a novel side-channel attack that leaks information out of Translation Lookaside Buffers (TLBs). TLBleed shows a reliable side channel without relying on the CPU data or instruction caches, and therefore bypasses several proposed CPU cache side-channel protections. Our TLBleed exploit successfully leaks a 256-bit EdDSA key from libgcrypt (used in e.g. GPG) with a 98% success rate after just a single observation of a signing operation on a co-resident hyperthread and just 17 seconds of analysis time. We use novel machine learning techniques to achieve this level of performance.
In this talk, I will introduce how hardware multipliers are designed. Then, I will discuss how this relates to a special case: FPGAs. The talk is very high level, so a lot of details will not be mentioned or even touched upon.
Certification of keys and attributes is in practice typically realized by a hierarchy of issuers. Revealing the full chain of issuers for certificate verification, however, can be a privacy issue since it can leak sensitive information about the issuer's organizational structure or about the certificate owner. Delegatable anonymous credentials solve this problem and allow one to hide the full delegation (issuance) chain, providing privacy during both delegation and presentation of certificates. However, the existing delegatable credentials schemes are not efficient enough for practical use. In this paper, we present the first hierarchical (or delegatable) anonymous credential system that is practical. To this end, we provide a surprisingly simple ideal functionality for delegatable credentials and present a generic construction that we prove secure in the UC model. We then give a concrete instantiation using a recent pairing-based signature scheme by Groth and describe a number of optimizations and efficiency improvements that can be made when implementing our concrete scheme.
Sovrin is a global public utility purpose-built for self-sovereign identity. I'll discuss the principles behind self-sovereign identity and why it's important. I'll talk through some of the past and current challenges and opportunities the contributors faced while designing and implementing the Sovrin network. I'll touch on scaling, privacy, credentials, security, on and off-ledger interactions, distributed key management, the consensus protocol, governance, and interoperability. I'll also talk a bit about cryptographic choices and challenges.
Mixing layers are responsible for diffusion in (symmetric) cryptographic primitives and are thus essential to withstand known attacks. In this talk I will explain why MixColumns in AES does what it does, and I will cover some of the research that has been done to construct more efficient mixing layers. I will also highlight column parity mixers as an alternative to MDS matrices and I will show how they can be used to design a cryptographic permutation.
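As a toy illustration of the column parity idea (the parameters below are invented for this sketch, not a mixer from the talk): each row is XORed with a fixed expansion of the column parity, so a single-bit difference spreads across the state.

```python
W = 16                                    # row width in bits

def rotl(x: int, r: int) -> int:
    return ((x << r) | (x >> (W - r))) & ((1 << W) - 1)

def theta(rows: list[int]) -> list[int]:
    parity = 0
    for row in rows:                      # column parity: XOR of all rows
        parity ^= row
    effect = rotl(parity, 1) ^ rotl(parity, 3)
    return [row ^ effect for row in rows] # fold the parity back into every row

a = [0x0001, 0x0000, 0x0000, 0x0000]      # two states differing in a single bit
b = [0x0000, 0x0000, 0x0000, 0x0000]
diff = [x ^ y for x, y in zip(theta(a), theta(b))]
print([f"{d:016b}" for d in diff])        # the one-bit difference has diffused
```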
Cyber security is an important pillar of the digital single market strategy of the European Union. Cyber threats undermine the confidence of consumers. This lack of trust prevents them from fully utilizing the possibilities of the online market. To foster the required confidence, the strategy is aimed at creating a level playing field with a high level of cyber security and consumer and personal data protection. A high level of security can only be reached when private parties are also stimulated to ensure cyber security. However, the actors on the digital market are not sufficiently incentivized as long as no legal security obligations towards the consumers exist. For this reason, the (implementation of) existing and proposed law of the European Union imposes several private law cyber security duties. Still, gaps continue to exist. The security of the digital market depends on many different actors. Even under the proposed rules, not all of these parties have a legal obligation to ensure the cyber security of their service or product. In this paper, I answer the following research question: To what extent does the ‘Digital Single Market’ impose consistent private law cyber security obligations on the providers of goods and services? I show that the duty to ensure cyber security depends on criteria that are, to some extent, arbitrary. Although the existing and proposed Directives and Regulations create a significant degree of harmonization, their approach remains piecemeal. They do not cover all relevant actors and activities. Furthermore, there are important differences between the existing duties to ensure cyber security. These gaps and differences encourage strategic behaviour by the actors on the digital market. They allow them to avoid the legal duties.
This is a contribution to the panel discussion at CPDP conference that will focus on the issues around the notion of private law liability (injunctions and/or compensation) with regard to data protection violations. If data subjects start a procedure, they will in principle bear the burden of proof that harm is caused, raising the question of what constitutes a data protection harm, as tort law requires a concrete (material or immaterial) harm. Furthermore, a violation of data protection law, and causality between such violation and the harm, must be established. This may be burdensome, given that (adverse) effects for data subjects are difficult to assess.
In elliptic-curve cryptography there is a classical difference between instantiations of protocols used for key exchange (Diffie-Hellman) and authentication (Schnorr-style signatures). Whereas for the first we can rely on x-only arithmetic (e.g. Curve25519), the latter requires a full group structure (e.g. Ed25519). In this talk I present analogs of the group-based identification and signature schemes for the x-line of elliptic curves, which allows for very small and simple implementations without too much loss of efficiency. It also applies more generally to a Kummer variety of any hyperelliptic curve. This work was presented at Asiacrypt'17.
The keyed duplex construction was introduced by Bertoni et al. (SAC 2011) and recently generalized to full-state absorption by Mennink et al. (ASIACRYPT 2015). We present a generalization of the full-state keyed duplex that natively supports multiple instances by design, and perform a security analysis that improves over that of Mennink et al. in terms of a more modular security analysis and a stronger and more adaptive security bound. Via the introduction of an additional parameter to the analysis, our bound demonstrates a significant security improvement in case of nonce-respecting adversaries. Furthermore, by supporting multiple instances by design, instead of adapting the security model to it, we manage to derive a security bound that is largely independent of the number of instances.
Prime Dutch cyber security results often do little for the Dutch economy; instead, they mainly benefit the dominant American ICT industry players. This limits research budgets. This presentation contains a proposal to address this through a public-private collaboration between Philips and cyber security research institutes.
If the worst-case time/space complexity of an algorithm is significantly higher than the respective average case, the algorithm may be vulnerable to a Denial-of-Service attack. In this talk, novel ideas are discussed to detect such security vulnerabilities using a combination of grey-box fuzzing and dynamic symbolic execution. I will present the tool Kelinci, which is an interface to run the popular grey-box fuzzer AFL on Java programs. Kelinci has identified bugs in Apache Commons Imaging and JDK versions 6-9. An extended version, KelinciWCA, adds a Worst-Case Analysis by collecting resource usage information and directing the fuzzer towards more costly program behaviors. It can achieve significant slow-downs on classic algorithms, quickly identify hash collisions and find regular expressions leading to exponential matching time. Finally, I will discuss how this powerful approach can be further improved by bringing in dynamic symbolic execution to alleviate fuzzing's biggest weakness.
White box cryptography aims to perform cryptography in software, and to do it in such a way that an attacker with full control over the system cannot recover the used key. A number of attacks from the hardware world have been shown to be effective on white box cryptography, in particular DFA and DPA. These attacks work very much the same as their hardware counterparts, with a few notable differences. The computations for DPA in this case are not based on power measurements but on values and addresses in the memory or registers of the target during the execution of the crypto operation. A notable problem is the amount of data to work with in these scenarios (up to a few gigabytes per encryption). This talk will discuss how to deal effectively with these amounts of data. We will present three ways in which the performance of DPA attacks on whiteboxes can be increased by several orders of magnitude. We will describe a fast solution for removing duplicate samples, a solution for choosing the correct samples to perform the attack on, and a novel method for visualizing white box executions. This novel visualization method can also be used to enhance DFA attacks and provides useful control flow information which aids in reverse engineering.
E-voting literature has long recognised the threat of denial-of-service attacks: attacks that (partially) disrupt the services needed to run the voting system. Such attacks violate availability. Thankfully, they are typically easily detected. We identify and investigate a denial-of-service attack on a voter’s spam filters, which is not so easily detected: reverse Bayesian poisoning, an attack that lets the attacker silently suppress mails from the voting system. Reverse Bayesian poisoning can disenfranchise voters in voting systems which rely on emails for essential communication (such as voter invitation or credential distribution). The attacker stealthily trains the voter’s spam filter by sending spam mails crafted to include keywords from genuine mails from the voting system. To test the potential effect of reverse Bayesian poisoning, we took keywords from the Helios voting system’s email templates and poisoned the Bogofilter spam filter using these keywords. Then we tested how genuine Helios mails are classified. Our experiments show that reverse Bayesian poisoning can easily suppress genuine emails from the Helios voting system.
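The mechanism is easy to see on a toy naive Bayes filter (a stand-in for the idea, not Bogofilter, and the keywords are invented): spam training mails that reuse the voting system's keywords drag those keywords' spam scores up until genuine mails classify as spam.

```python
from collections import Counter
import math

spam_words, ham_words = Counter(), Counter()

def train(words, is_spam):
    (spam_words if is_spam else ham_words).update(words)

def spamminess(words):
    score = 0.0
    for w in words:
        p = (spam_words[w] + 1) / (spam_words[w] + ham_words[w] + 2)  # smoothed
        score += math.log(p / (1 - p))
    return score                       # positive means "looks like spam"

ballot = "your helios election credential ballot".split()
train("cheap pills offer".split(), True)
train("meeting agenda attached".split(), False)
print(spamminess(ballot))              # ~0: the filter has no opinion yet

for _ in range(20):                    # attacker sends spam reusing the keywords
    train("cheap offer helios election credential ballot".split(), True)
print(spamminess(ballot))              # strongly positive: genuine mail suppressed
```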
Fuzzy Logic is a valuable tool for performing logical computations on truth values that lie between True and False. However, Fuzzy Logic lacks expressivity: it is not able to distinguish between fuzziness that originates from lack of data and fuzziness that originates from contradictory data. Subjective Logic has an additional dimension: uncertainty due to lack of data. An algebra has been developed for "opinions" in this logic, as well as rules for combining opinions and forming chains of opinions. The original rule for chaining opinions was defective. In this talk we explain the problem and how it was resolved. We have introduced a new chaining rule which simply puts a weight on pieces of evidence. We show how the new algebra enables opinion computations for arbitrary trust networks. (http://arxiv.org/abs/1402.3319)
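For intuition, here is a small sketch of opinions and the standard consensus (fusion) operator of Subjective Logic; the corrected chaining rule from the talk itself is in the linked paper and is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float   # belief
    d: float   # disbelief
    u: float   # uncertainty (b + d + u = 1)

def fuse(x: Opinion, y: Opinion) -> Opinion:
    """Consensus of two independent opinions (assumes x.u + y.u > x.u * y.u)."""
    k = x.u + y.u - x.u * y.u
    return Opinion((x.b * y.u + y.b * x.u) / k,
                   (x.d * y.u + y.d * x.u) / k,
                   (x.u * y.u) / k)

print(fuse(Opinion(0.7, 0.1, 0.2), Opinion(0.5, 0.2, 0.3)))
```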
In this lunch talk, our new PhD students, Paulus and Alexandru will talk about their previous and current research for about 5 minutes each.
Industrial Control Systems are under increased scrutiny. Their security is historically sub-par, and although measures are being taken by the manufacturers to remedy this, the large installed base of legacy systems cannot easily be updated with state-of-the-art security measures. We propose a system that uses electromagnetic side-channel measurements to detect behavioural changes of the software running on industrial control systems. To demonstrate the feasibility of this method, we show it is possible to profile and distinguish between even small changes in programs on Siemens S7-317 PLCs, using methods from cryptographic side-channel analysis. In this short talk, I will present the main points of our paper on a non-technical level, in preparation of presenting it at CRITIS2017 in Lucca. If desired, I can then discuss implementation, technical details, and other questions you may have.
This paper presents Gimli, a 384-bit permutation designed to achieve high security with high performance across a broad range of platforms, including 64-bit Intel/AMD server CPUs, 64-bit and 32-bit ARM smartphone CPUs, 32-bit ARM microcontrollers, 8-bit AVR microcontrollers, FPGAs, ASICs without side-channel protection, and ASICs with side-channel protection.
In tomorrow's lunch colloquium, Jair Santanna, a Ph.D. candidate at the University of Twente, will talk about how DDoS attacks have evolved over the last half decade. Jair will focus his presentation on revealing findings about public websites that offer DDoS attacks as a paid service, called booters or stressers. Large network security companies (e.g., Akamai and Incapsula) have pointed to these websites as the primary reason for the increase in attacks (both in occurrences and in power). Jair will talk about what he has observed over four years of research in this field and discuss future directions to address the problem.
The distinguishing feature of the Internet of Things is that many devices get interconnected. The threat of side-channel attacks in this setting is less understood than the threat of traditional network and software exploitation attacks that are perceived to be more powerful. A particular question for a designer of IoT devices and protocols is whether network layer security mechanisms should feature protections against side-channel analysis, such as Differential Power or Electromagnetic Attacks. With this question in mind, we perform a multidisciplinary case study of Thread, an emerging network and transport level stack designed to facilitate secure communication between heterogeneous IoT devices.
We perform the first side-channel vulnerability analysis of the Thread networking stack. We leverage various network mechanisms to trigger manipulations of the security material or to get access to the network credentials. We choose the best attack vector we identified to build a complete attack that combines network specific mechanisms and Differential Electromagnetic Analysis. When successfully applied on a Thread network, the attack gives full network access to the adversary. We evaluate the feasibility of our attack using a network of nodes running OpenThread, an open-source implementation of the stack. Then, we suggest a range of countermeasures to prevent our attack and the other attack vectors we identified during the vulnerability analysis. Our experience shows that resistance to side-channel attacks should be carefully considered in designing and implementing protocols for the IoT.
Differential power analysis attacks have been around in open research for almost twenty years. The field is nowadays rich in methods and techniques, but not in state-of-the-art tools. Given the very experimental nature of side channel attacks, this is somewhat surprising. I would like to discuss this issue and present an experimental high performance open-source toolkit for DPA written in Julia.
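The core computation such a toolkit must make fast is the correlation loop of a CPA-style attack; a bare-bones numpy version looks roughly as follows (illustrative only; the S-box here is a placeholder and the toolkit itself is written in Julia).

```python
import numpy as np

SBOX = np.arange(256, dtype=np.uint8)     # placeholder; use the cipher's real S-box
HW = np.array([bin(v).count("1") for v in range(256)], dtype=float)

def cpa_recover_key_byte(traces: np.ndarray, plaintexts: np.ndarray) -> int:
    """traces: (n, samples) power measurements; plaintexts: (n,) one byte each."""
    best_key, best_corr = 0, 0.0
    centered = traces - traces.mean(axis=0)
    norms = np.linalg.norm(centered, axis=0)
    for key in range(256):
        model = HW[SBOX[plaintexts ^ key]]          # hypothetical leakage per trace
        m = model - model.mean()
        corr = np.abs(m @ centered) / (np.linalg.norm(m) * norms)
        if corr.max() > best_corr:                  # best correlation over samples
            best_key, best_corr = key, corr.max()
    return best_key
```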
It is well known that a holder of a passport can use the Active Authentication protocol to sign messages. This usage is known as Remote Document Authentication (RDA). That is, one can effectively use an e-passport as a PKI smartcard to authenticate its holder to an external party. In this talk I will explain how a passport can also cater for encryption by (ab)use of the Chip Authentication protocol. With Remote Document Encryption (RDE) any external party can extract a public key from an e-passport bound to the user identity, allowing encryption of data that can only be decrypted by the holder using their e-passport. We can also introduce the notion of an RDE Extraction PIN bound to the passport, effectively providing the same security as a regular PIN. To use RDE the user needs an RFID card reader connected to their computer or an NFC enabled mobile device. The latter is no longer limited to Android based devices as Apple recently announced opening its NFC interface. See https://gototags.com/blog/apple-ios-11-supports-reading-nfc-tags-iphone-7-iphone-8-core-nfc/. Possible RDE applications include secure email, compartmentalization of personal data within portals, secure data storage on NFC devices and cloud encryption. The results also apply to (Dutch) identity cards and electronic driver licences. RDE ironically suggests that carrying a passport when traveling abroad might violate export or import laws on strong cryptography. Details at: https://arxiv.org/abs/1704.05647
Research stems from education. Computer science and mathematics are constantly interacting; one needs the other. What do we want from students in terms of skills and knowledge when they arrive at the university and when they leave? When I was involved in teaching maths-related courses at our department such as Security, Cryptography, Calculus and Probability, I was startled. Encountering mathematics, students were happy and successful at following cookbook-like recipes. However, just a very tiny minority of them was able to do something sensible with unexpected problems. Later I figured out the reason: mathematics for students is computation rather than thinking. Conducting interviews with students and PhD applicants, studying state-of-the-art didactics of mathematics and talking with experts, I see that mathematics education is on the brink of a big change. In this talk I will discuss some of my exciting experiences from the last year and a half in this field.
During lunch, we are going to digitally rob a bank. Not for profit, but as a friendly match against the security team of this bank. We believe that only perfect practice makes perfect: by simulating an attack as realistically as possible, an organisation is optimally prepared for real incidents. Expect a very practical presentation with demos of modern hacker techniques combined with lessons learned from security teams in defending against targeted attacks.
"Why did you bring a cake? Is it your birthday today?". "No, I just like to bake". In this talk, I will open up about my hobby of baking cakes. I will share lots of cake pictures and talk about the process of baking, reading recipes, making some small changes and other tips.
Elliptic Curve Cryptography operations rely heavily on the strong security of scalar multiplication. However, this operation is vulnerable to side channel (SCA) and fault injection (FA) attacks. The use of alternative arithmetic systems like the Residue Number System (RNS) for all the underlying operations of scalar multiplication has been proposed as an efficient countermeasure approach against the above mentioned attacks. In RNS, a number is represented as a set of smaller numbers, each of which is the result of modular reduction by a modulus from a given basis. Under certain requirements, a number can be uniquely transformed from the integers to the RNS domain (and vice versa) and all arithmetic operations can be performed in RNS. This representation provides inherent SCA and FA resistance to many attacks and can be further enhanced by additional RNS arithmetic manipulations or more traditional algorithmic countermeasures. In this presentation, I am going to present a secure RNS-based Montgomery Powering Ladder scalar multiplication algorithm, implemented on an ARM Cortex A7 processor (Raspberry Pi 2B). The SCA-FA resistance of the implementation is evaluated by collecting preliminary leakage trace results that validate our initial assumptions. This is joint work with Apostolos Fournaris and Nicolas Sklavos during my research visit at the University of Patras and is going to be presented at SEMS 2017 in Paris.
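A minimal sketch of the RNS idea (with a toy moduli basis, not the one from the implementation): arithmetic is done independently per modulus, and the integer is recovered with the Chinese Remainder Theorem.

```python
from math import prod

BASIS = [13, 17, 19, 23]                # pairwise coprime toy moduli
M = prod(BASIS)                         # dynamic range: 96577

def to_rns(x: int) -> list[int]:
    return [x % m for m in BASIS]

def from_rns(res: list[int]) -> int:
    x = 0
    for r, m in zip(res, BASIS):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # CRT: this term is r mod m, 0 elsewhere
    return x % M

a, b = 1234, 5678
# addition (and likewise multiplication) is carried out independently per modulus,
# with no carries propagating between the limbs
s = [(x + y) % m for x, y, m in zip(to_rns(a), to_rns(b), BASIS)]
assert from_rns(s) == (a + b) % M
```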
SEGRID is a collaboration project funded by the EU under the FP7 program. SEGRID partners are DSOs, manufacturers, knowledge institutions and universities. SEGRID’s main objective is to enhance the protection of smart grids against cyber-attacks. In this presentation an overview will be given of the storyline, use cases and research activities in SEGRID, including risk analysis and management, development of security vulnerability assessment, discovery and diagnosis tools, and novel security solutions such as resilient SCADA systems, SDN based resilient communication infrastructure, robust and scalable D(T)LS-based communication, and privacy by design. For more information, see https://segrid.eu/ .
The Learning Parity with Noise (LPN) problem is a fundamental problem from learning theory that in recent years has attracted the attention of the cryptographic community as a hardness assumption underlying several new cryptographic primitives. Tightly connected to the problem of decoding random linear codes, it is believed to be resistant to quantum attacks, and thus is very interesting for designing post-quantum secure cryptosystems. There are various approaches for solving the LPN problem, and the most interesting are variants of the BKW algorithm, which employ a reduce-and-solve technique. Recently, at ASIACRYPT '14, Guo et al. introduced a novel reduction technique relying on the covering properties of linear binary codes. While the technique is very powerful, the question of constructing codes with good covering properties and efficient decoding remained open. Several subsequent papers address this question, and currently the best known results, reported by Bogos & Vaudenay, are achieved using direct sums of small codes. In this work, we propose a family of codes equipped with an efficient decoding algorithm that can be used in the reduction technique by Guo et al. The proposed codes are much closer to random codes than previous solutions, which intuitively implies better covering properties. As a consequence, our algorithm, unlike previous proposals, is better suited for larger LPN instances and outputs sub-optimal solutions as a trade-off for efficiency. The initial results, both theoretical and experimental, show improvements compared to the best known results for all LPN instances with secret length at least 532.
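For reference, an LPN instance is easy to state: the attacker sees noisy inner-product samples (a, <a, s> + e) and must recover the secret s. The sketch below generates such samples; sizes and noise rate are toy values, far below the 532-bit instances mentioned above.

```python
import random

def lpn_samples(secret: list[int], n: int, tau: float = 0.125):
    """Yield n samples (a, <a, secret> + e mod 2) with Bernoulli(tau) noise e."""
    k = len(secret)
    for _ in range(n):
        a = [random.randrange(2) for _ in range(k)]
        e = 1 if random.random() < tau else 0
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % 2
        yield a, b

s = [random.randrange(2) for _ in range(16)]   # toy secret length
samples = list(lpn_samples(s, 1000))           # input to a BKW-style solver
```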
Imagine there's no block ciphers, it's easy if you try :-) The SHA-3 competition has revealed that a fixed-length permutation is an excellent building block for hashing by means of the sponge. By including a key in the input this can readily be used for message authentication (MAC), and by exploiting the arbitrarily long sponge output, for stream encryption. The duplex variant of the sponge widens the spectrum to, among others, authenticated encryption and reseedable pseudorandom generation. Up to a few years ago, it was widely believed that, for the same level of security, block-cipher-based modes would be more efficient than permutation-based modes. This picture has recently changed thanks to new strong generic security bounds for a keyed duplex variant that allows full-state absorbing. However, the sponge/duplex modes have the disadvantage that they are inherently serial, and exploiting parallelism requires building an additional mode layer on top. We address this concern with Farfalle, a new construction that is a parallel keyed sponge variant. Its structure strongly relaxes the cryptographic requirements on the underlying permutation in comparison with the keyed sponge or Even-Mansour, and hence it has great potential for high-speed crypto. Farfalle builds a pseudorandom function (PRF) with arbitrary-length input and output that can readily be used for stream encryption and MAC. We realize session-based authenticated encryption, synthetic-IV authenticated encryption and a wide block cipher by the application of some amazingly simple PRF-based modes. In the talk, I will give an overview of these recent innovations in permutation-based crypto.
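To fix ideas on the serial sponge structure that Farfalle parallelises, here is a toy keyed sponge MAC; the "permutation" is a hash-based stand-in and the parameters are invented, so this is a teaching sketch, not a secure or standardised construction.

```python
import hashlib

RATE, WIDTH = 8, 32      # bytes; toy parameters

def perm(state: bytes) -> bytes:
    # stand-in for a fixed-length permutation (SHA-256 is not a permutation!)
    return hashlib.sha256(state).digest()

def pad(data: bytes) -> bytes:
    data += b"\x80"
    return data + bytes(-len(data) % RATE)

def keyed_sponge_mac(key: bytes, msg: bytes, taglen: int = 16) -> bytes:
    state = bytes(WIDTH)
    for part in (pad(key), pad(msg)):             # absorb key, then message
        for i in range(0, len(part), RATE):       # note: inherently serial
            block = part[i:i + RATE]
            state = bytes(s ^ b for s, b in zip(state[:RATE], block)) + state[RATE:]
            state = perm(state)
    out = b""
    while len(out) < taglen:                      # squeeze the tag
        out += state[:RATE]
        state = perm(state)
    return out[:taglen]

print(keyed_sponge_mac(b"key", b"message").hex())
```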
In 2016 I was invited to teach a course on Cyber Security at Innopolis University, which is a university that was founded only a few years ago in Kazan in Russia. Because Russia happens to be one of my most favorite countries I accepted the offer to come work for them as a visiting lecturer. In the talk I'll discuss my experiences with this university, the students, the course and some logistics.
At USENIX Security 2016, Alkim, Ducas, Pöppelmann and Schwabe presented the "NewHope" Ring-LWE based key exchange protocol. The paper received some attention, if nothing else because USENIX and Facebook awarded the "Internet Defense Prize" for the paper and because Google used the algorithm in a post-quantum experiment for TLS. In my talk I will briefly review the NewHope protocol and then present Kyber, which can be seen as a successor to NewHope that improves communication complexity and security properties.
The acquisition of volatile memory may be non-trivial in an incident response scenario. A worst case scenario may force an investigator into executing a cold boot attack. During this presentation I will discuss a (forensically sound) proof-of-concept acquisition method, together with a justification for the choices that have led to its development. Finally, as only scrambled data is stored in the actual memory modules, I present my attempt at reverse engineering the memory scrambler included in Ivy Bridge generation Intel processors.
A physically unclonable function, or PUF, is some physical structure with properties that are easy to verify, hard to predict, and practically impossible to clone. Ideally, this means it's a device-unique unchanging identifier, which can be used for improving security. However, it can be at odds with privacy and anonymity. This talk will give you an overview of the thirty years of history behind PUFs, and will include the most recent advances in research. The functions, structure, and design will be discussed, as well as devices and materials that have properties to base PUFs on.
After having been proposed in 2014, there has recently been a lot of attention for cryptography based on hard problems related to isogeny graphs of supersingular elliptic curves. Many of the techniques used are related to the ones we rely on in elliptic-curve cryptography, but in general things become quite a bit more complex. This talk will focus on introducing the ideas behind supersingular-isogeny-based cryptography, assuming only very little background knowledge. It will be a very high-level approach, hoping to give you some intuition and spark your interest in the subject!
Software systems can be vulnerable to algorithmic complexity attacks if certain input causes them to consume excessive amounts of CPU time or space. If an adversary is able to efficiently construct such inputs, (s)he can potentially deny service to benevolent users. I will present an integrated symbolic execution approach for space-time analysis of Java programs, which can detect such algorithmic complexity based vulnerabilities. Our tool-set contains a pre-processing component that uses static analysis and visualization to help developers identify potentially vulnerable parts of the code for further analysis. A symbolic execution component implements Worst-Case Analysis to detect algorithmic complexity vulnerabilities, parametrized with respect to cost models for space-time consumption. The Java verification tool JayHorn, based on Constrained Horn Clauses, is used to check the inferred bounds. The effectiveness of our approach is demonstrated on a set of challenging use cases on which we identified many space-time vulnerabilities.
One of the most interesting strong PUF constructions is the Bistable-Ring PUF (BR PUF), which is derived from the Möbius Ring-Oscillator. Its interest is due to the lack of a correct mathematical model describing its internal behavior. Despite the absence of a precise model, we will show that the BR PUF can be strongly PAC learned. Towards this, we will use deep results from the field of Spectral Analysis of Boolean Functions (≈ modern Percolation Theory). We show that the surprisingly low Average Sensitivity of a BR PUF is the crucial insight into its PAC learnability. Additionally, we will illustrate our results through extensive practical experiments on real BR PUFs. Time permitting, we aim to discuss a very recent insight on the existence of a single highly influential variable for every BR PUF, i.e., every BR PUF of length n has a single variable approximating the given BR PUF with a constant error bounded away from ½.
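Average sensitivity (total influence) is simple to compute exhaustively for small n; a sketch follows (for a real BR PUF one would estimate it from challenge-response pairs instead).

```python
def average_sensitivity(f, n: int) -> float:
    """Average number of bit flips that change f's output, over all inputs."""
    total = 0
    for x in range(1 << n):
        for i in range(n):
            if f(x) != f(x ^ (1 << i)):   # does flipping bit i flip the output?
                total += 1
    return total / (1 << n)

parity = lambda x: bin(x).count("1") % 2
print(average_sensitivity(parity, 8))     # parity is maximally sensitive: 8.0
```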
For his master's thesis, Timmy Weerwag proved that the operations in TweetNaCl are correct with respect to the elliptic curve specifications (Montgomery ladder...). However, he did not provide any proofs of the datatype implementations (the basic operations: addition, multiplication etc.). This work (_still in progress_) aims to fill this gap.
I will start with a short intro on power analysis and fault injection. Then I will introduce the single instruction skip attack model. My work is focused on compiler defenses for this attack model. I will try to give a brief overview of the state-of-the-art defenses, both software and hardware. I will introduce instruction duplication as a defense, show some limitations of the software defenses and explain the overhead of such a defense. Finally, I will try to introduce a research question that I want to focus on in the near future.
This talk will describe MQDSS, a digital signature scheme based on the problem of solving a multivariate system of quadratic equations (i.e. the `MQ problem'). By discussing the Fiat-Shamir transform, an identification scheme and the MQ problem, we will see how the scheme is constructed. We then briefly look at MQDSS-31-64, a concrete instance of the scheme that provides post-quantum security.
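For concreteness, the sketch below writes down a toy MQ instance over GF(2) and evaluates it; MQDSS itself works over a small prime field such as GF(31), and these sizes are far below cryptographic parameters.

```python
import itertools, random

def random_mq_system(n: int, m: int):
    # quadratic monomials x_i * x_j over GF(2); (i, i) doubles as the linear term
    mons = list(itertools.combinations_with_replacement(range(n), 2))
    system = [[random.randrange(2) for _ in mons] for _ in range(m)]
    return system, mons

def evaluate(system, mons, x):
    return [sum(c * x[i] * x[j] for c, (i, j) in zip(eq, mons)) % 2
            for eq in system]

n, m = 8, 16
system, mons = random_mq_system(n, m)
x = [random.randrange(2) for _ in range(n)]
v = evaluate(system, mons, x)   # computing v from x is easy; recovering some x
print(v)                        # that evaluates to a given v is the MQ problem
```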
Power and clock glitch attacks on smart cards can help an attacker discover internal secrets or bypass certain security checks. An attacker can also manipulate the temperature and supply voltage of the device, making the device glitch more easily. If these manipulations are within the device's operating conditions, it becomes harder to distinguish an extreme operating condition from an attack. To demonstrate temperature and power supply effects on fault attacks, we perform several tests on an Atmega 163 microcontroller under different conditions. Our results show that these kinds of attacks are still a serious threat to small devices, even when staying within the manufacturer's recommended operating conditions.
Keccak is a cryptographic primitive. Using Keccak in keyed mode, data can be encrypted. During an encryption process, information leaks through the power consumption of the device that performs the encryption. This leakage can sometimes be exploited using statistical analysis. Work has been done on an attack on a theoretical model, which we tried to verify using an implementation on an FPGA. We also tried to create a more efficient attack to reduce the number of traces required to successfully obtain the correct key.
So this talk will probably be attractive to the theoretical, functional programming and correctness oriented people, as Liquid Types (in full: Logically Qualified Data Types) are a system that combines Hindley-Milner type inference with predicate abstraction, in order to infer dependent types (for safety properties).
Identity federation is a way of authenticating to service providers that is being used by many people. For example by all students and staff of Dutch universities, when they use SURFconext. Since personal details are being exchanged between parties in such a system, privacy is an important factor that currently isn't always protected as well as it can be. Polymorphic pseudonyms can be used to address these problems. In this talk we'll explore how this works and what the advantages of polymorphic pseudonymization are. This talk is based on the master thesis Polymorphic Pseudonymization in Educational Identity Federations, for which I received the Joop Bautz Information Security Award.
Service providers are often reluctant to support anonymous access, because this makes it hard to deal with misbehaving users. Anonymous blacklisting and reputation systems can help prevent misbehaving users from causing more damage. However, by the time the user is blocked or has lost reputation, most of the damage has already been done. To help the service provider to recover from abuse by malicious anonymous users, I'll present the vote-to-link system. In the vote-to-link system, moderators (rather than a single trusted third party) can cast votes on a user's action if they deem it to be bad. After enough moderators have voted on the action, the service provider can use these votes to link all the actions by the same user within a limited time frame and thus recover from these actions. All the user's actions in other time frames, however, remain unlinkable. To protect the voting moderators from retaliation, we also propose a (less efficient) variant that allows moderators to vote anonymously. We implemented and evaluated both variants to show that they are practical. In particular, we believe this system is suitable to combat malicious Wikipedia editing.
Millions of people make use of free and open-source software. The infrastructure of the internet relies on it, as do the servers of large and small organisations. Without FOSS, the internet as we know it would not exist. In this talk Felix Stegerman, Deputy Coordinator Netherlands of the Free Software Foundation Europe and bachelor student CS, will give a quick overview of the history and definition of FOSS and its relation to fundamental freedoms, vendor dependence, privacy and autonomy. He will discuss its relevance to individuals, the public sector, academia, and businesses; including how using Free Software in a commercial context relates to business models. Felix will point to the differences between the traditional "copyleft" and modern popular free software licenses and will underline their advantages and disadvantages in a number of use cases.
This talk gives an introduction to the "PEP" project that recently received 750K funding from the province of Gelderland to develop a security infrastructure for personalised healthcare. Background can be found in the whitepaper (which is already slightly outdated): http://eprint.iacr.org/2016/411
Cryptographic protocols need rigorous security proofs, to ensure that a certain notion of security is achieved. Often, security is defined by a set of security games. An alternative way to define security is by an ideal functionality, this is called the simulation security paradigm. In this talk, I will give a short introduction into simulation based security and show some advantages over game-based security definitions.
Online tracking of users is used for benign goals, such as detecting fraudulent logins, but also to invade user privacy. We posit that for non-oppressed users, tracking within one website does not have a substantial negative impact on privacy, while it enables legitimate benefits. In contrast, cross-domain tracking negatively impacts user privacy, while being of little benefit to the user. Existing methods to counter tracking treat any and all tracking similarly: client-side storage is blocked, or all sites are fed random characteristics to hamper re-identification. We develop a tool, FP-Block, that counters cross-domain tracking while allowing intra-domain tracking. For each visited site, FP-Block generates a unique, consistent fingerprint: a "web identity". This web identity is then used for any interaction with that site, including for third-party embedded content. This ensures that ubiquitously embedded parties will see different, unrelatable fingerprints from different sites, and can thus no longer track the user across the web.
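The core mechanism can be sketched in a few lines (an illustration of the idea, not FP-Block's actual code; the attribute pools are invented): derive the web identity deterministically from a local secret and the first-party domain.

```python
import hashlib, hmac, os

SECRET = os.urandom(32)                             # per-browser-profile secret

UA_POOL = ["UA-variant-%d" % i for i in range(8)]   # illustrative attribute pools
SCREENS = [(1366, 768), (1920, 1080), (1440, 900)]

def web_identity(first_party_domain: str) -> dict:
    d = hmac.new(SECRET, first_party_domain.encode(), hashlib.sha256).digest()
    return {"user_agent": UA_POOL[d[0] % len(UA_POOL)],
            "screen": SCREENS[d[1] % len(SCREENS)]}

# same site, same fingerprint; different sites, unrelated fingerprints
assert web_identity("shop.example") == web_identity("shop.example")
print(web_identity("shop.example"), web_identity("news.example"))
```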
In this talk we discuss a class of unconventional side-channel analysis, namely location-based attacks. We demonstrate the applicability of those attacks in ARM Cortex M4 processors and discuss possible attack modeling options via information theory.
In order to assess the security of cryptographic permutations and block ciphers, a lower bound on the weight of (differential and linear) trails is desirable. For some primitives this requires an algorithm scanning the space of trails up to some bound. This algorithm involves multiple and complex iterators. My work was to provide a formal approach to verifying the soundness of the program with respect to the expected bounds. To achieve that goal, two different approaches have been tested and are presented. This led to the construction of an abstract iterator. The correctness of this iterator relies on the soundness of the tree definition.
We describe a largely automated and systematic analysis of TLS implementations by what we call ‘protocol state fuzzing’: we use state machine learning to infer state machines from protocol implementations, using only blackbox testing, and then inspect the inferred state machines to look for spurious behaviour which might be an indication of flaws in the program logic. For detecting the presence of spurious behaviour the approach is almost fully automatic: we automatically obtain state machines and any spurious behaviour is then trivial to see. Detecting whether the spurious behaviour introduces exploitable security weaknesses does require manual investigation. Still, we take the point of view that any spurious functionality in a security protocol implementation is dangerous and should be removed. We analysed both server- and client-side implementations with a test harness that supports several key exchange algorithms and the option of client certificate authentication. We show that this approach can catch an interesting class of implementation flaws that is apparently common in security protocol implementations: in three of the TLS implementations analysed new security flaws were found (in GnuTLS, the Java Secure Socket Extension, and OpenSSL). This shows that protocol state fuzzing is a useful technique to systematically analyse security protocol implementations. As our analysis of different TLS implementations resulted in different and unique state machines for each one, the technique can also be used for fingerprinting TLS implementations.
The European Railway Traffic Management System (ERTMS) is a European standard for next-generation train management and signalling. It is intended to make it easier for trains to cross borders and optimise the running of the railway. In this talk I will introduce the system and present the security analysis we performed at the University of Birmingham. Using ProVerif, we analysed the authentication protocol used in the communication between the train and back-end. Furthermore we analysed the MAC algorithm used for this communication channel and discovered there is a possibility to forge messages, such that the corresponding MAC is the same as for a previously observed (valid) message.
The Internet of Things (IoT) is a ubiquitous system that incorporates not only the current Internet of computers, but also smart objects and sensors. IoT technologies often rely on centralised architectures that follow the current business models. This makes efficient data collection and processing possible, which can be beneficial from a business perspective, but has many ramifications for users' privacy. As communication within the IoT happens among many devices from various contexts, they need to authenticate each other to know that they talk to the intended party. Authentication, typically including identification, is the proof of identity information. However, transactions linked to the same identifier are traceable, and ultimately make people traceable too, hence their privacy is threatened. We propose a framework to counter this problem. We argue that by applying attribute-based (AB) authentication in the context of the IoT, one empowers users to maintain control over the disclosure of data by the devices they own. At the same time, AB authentication allows us to achieve data minimisation and unlinkability between users' transactions. Altogether this enables an important improvement of privacy in the IoT.
An elliptic curve addition law is said to be complete if it correctly computes the sum of any two points in the elliptic curve group. One of the main reasons for the increased popularity of Edwards curves in the ECC community is that they can allow a complete group law that is also relatively efficient (e.g., when compared to all known addition laws on Edwards curves). Such complete addition formulas can simplify the task of an ECC implementer and, at the same time, can greatly reduce the potential vulnerabilities of a cryptosystem. Unfortunately, until now, complete addition laws that are relatively efficient have only been proposed on curves of composite order and have thus been incompatible with all of the currently standardized prime order curves. I present optimized addition formulas that are complete on every prime order short Weierstrass curve defined over a field k with char(k) not 2 or 3. Compared to their incomplete counterparts, these formulas require a larger number of field additions, but interestingly require fewer field multiplications.
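The contrast is easiest to see in the incomplete affine law, which needs special cases for the identity, inverses and doubling; a sketch over a toy field (curve parameters invented for illustration):

```python
p, a = 101, 2          # toy prime field and curve coefficient (y^2 = x^3 + ax + b)

def add(P, Q):
    if P is None: return Q                    # branch 1: identity element
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:       # branch 2: P = -Q, result is identity
        return None
    if P == Q:                                # branch 3: doubling has its own slope
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                     # generic chord addition
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)
```

Each of these branches is both an implementation pitfall and a potential side channel; the complete formulas from the talk compute the correct result for every input pair with a single exception-free evaluation.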
As most of you know, we have been working on the IRMA project for a couple of years now. While initially the focus was to provide attribute based authentication using smart cards, we have since shifted to a more phone oriented approach. This dramatically improves usability, but also causes new and interesting challenges.
In this lunch talk I will briefly introduce IRMA, show some demos explaining the current setup, and discuss with you some of the new challenges we face.
White-box cryptography is the discipline of implementing a cryptographic algorithm in software, such that it is hard for an attacker to extract the key. The attacker is thereby assumed to have full access to and full control over the execution environment. A few years ago, white-box crypto was mainly used for DRM. However, since Host Card Emulation (HCE) has been introduced in Android, it is also increasingly used for payment. In this presentation, we discuss what white-box crypto is, how it is typically done, and what the state-of-the-art is in attacking these implementations.
In this talk, Aleks will give a brief intro to the "diagrammatic approach" to quantum theory, whereby one uses diagrams of quantum states, and processes acting on those states, to study some of the more surprising features of the physical world. Along the way, he will introduce quantum teleportation, prove that it works, and show that the key "trick" is the same as one-time pad crypto, where we simply substitute some types.
Tweakable blockciphers have faced an immense increase in popularity lately, in part due to their use in authenticated encryption schemes. A well-established approach to tweakable blockcipher design is via masking, where a certain primitive (a blockcipher or a permutation) is preceded and followed by an easy-to-compute tweak-dependent mask. In this talk, we investigate the use of tweakable blockciphers in authenticated encryption and revisit the principle of masking. Using recent advancements in the computation of discrete logarithms, we develop a highly efficient, constant time masking technique. The new masking allows for authenticated encryption in about 0.55 cycles per byte, significantly outperforming its closest competitors.
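The masking principle itself is compact; the sketch below shows an XEX-style shape with a hash-based stand-in for the blockcipher and a naive tweak mask, not the discrete-log-based masking from the talk.

```python
import hashlib

def E(key: bytes, block: bytes) -> bytes:
    # 16-byte PRF standing in for a blockcipher (not invertible; sketch only)
    return hashlib.sha256(key + block).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tweakable_encrypt(key: bytes, tweak: bytes, block: bytes) -> bytes:
    mask = E(key, tweak)                        # easy-to-compute tweak-dependent mask
    return xor(E(key, xor(block, mask)), mask)  # mask before and after the primitive

print(tweakable_encrypt(b"k" * 16, b"tweak-1", b"0123456789abcdef").hex())
```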
In this talk we discuss how to use evolutionary computation (EC) in cryptology. We start with a short introduction on EC and then briefly elaborate on similarities and differences between EC and machine learning methods. Since cryptology includes a plethora of difficult and non-deterministic problems it is obvious that evolutionary computation can be employed there. In fact, EC methods have been successfully used in cryptology for more than 20 years. In accordance with that, we present a number of EC applications in crypto area and possible research directions.
This talk will show that it is feasible to implement the stateless hash-based signature scheme SPHINCS-256 on an embedded microprocessor with memory even smaller than a signature and limited computing power. We demonstrate that it is possible to generate and verify the 41KB signature on an ARM Cortex M3 that only has 16KB of memory available, and compare it to the stateful alternative by implementing XMSS^MT on the same platform. This talk partially overlaps with the talk I gave when presenting my Master's thesis, but also contains additions from the work we did afterwards.
Privacy is a hot topic, not only because privacy-enhancing technologies (PETs) are exciting, but also because the world is constantly transforming. This lunch colloquium attempts to put PETs in a somewhat broader context. It provides a brief overview of privacy and how technologies affect it. To make the discussion even more interesting, I will show surprising demonstrations and share with you some of my experiences from the new Privacy and Identity course that has just finished.
The upcoming General Data Protection Regulation is quickly becoming of great concern to organisations processing personal data of European citizens. To translate these legal requirements into privacy friendly designs for systems that process personal data is however a nontrivial task. One recently proposed approach to make privacy by design more concrete in practice is privacy design strategies. This paper uses the findings of an extensive literature review, in particular a catalogue of surveyed privacy patterns, to improve strategy definitions and introduce an additional level of abstraction between strategies and patterns: ‘tactics’. It formally defines these tactics, and explores the relationships between concepts which help bridge the gap between legal requirements and system development.
Last year Carlo Meijer and Roel Verdult published a new attack "Ciphertext-only Cryptanalysis on Hardened Mifare Classic Cards". This card-only attack works on recent Mifare cards that implement an improved random number generator. Ronny will present a refinement of the attack that needs fewer traces and less search time.
How can you provably minimize your implementation? In this work, we explore the feasibility of applying SAT solvers to optimizing implementations of small functions such as S-boxes for multiple optimization criteria, e.g., the number of nonlinear gates and the number of gates. We provide optimized implementations for the S-boxes used by several candidates in the CAESAR competition. We then suggest a new method to optimize for circuit depth and we made (or we are about to make) tooling publicly available to find efficient implementations for several criteria. Furthermore, we illustrate with the 5-bit S-box of PRIMATEs how multiple optimization criteria can be combined.
Recently, we proposed a Ring-LWE-based key-exchange protocol called "Newhope" and illustrated that this protocol is very efficient on large Intel processors. Our paper also claims that the parameter choices enable efficient implementations on small embedded processors. In this follow up work, we show that these claims are actually correct and present Newhope software for the ARM Cortex-M family of 32-bit microcontrollers. More specifically, our software targets the low-end Cortex-M0 and the high-end Cortex-M4 processor from this family. Our software starts from the C reference implementation and then carefully optimizes subroutines in assembly. As a result, the server side of the key exchange executes in only 1594889 cycles on the M0 and only 1626589 cycles on the M4; the client side executes in 1840024 cycles on the M0 and 1791200 cycles on the M4.
This talk is about the use of simulators in side-channel attacks. We will argue that the field of side-channel analysis needs simulators. We will go through different types of simulators, their strengths and their weaknesses, while trying to sketch the type of attacker (or researcher) that might have access to the resources and knowledge needed to use such simulators. We will also talk about cases in which a simulator might be more convenient than a real experiment, as well as the limitations of simulations compared to real-life experiments.
We use elementary results in number theory to investigate the number of trials needed by Shor's algorithm to factor an integer, with a view to minimising the amount of experimental physics work required. In particular, we show that one call to the quantum subroutine is enough to factor strong RSA keys. This implies the somewhat surprising result that while strong RSA moduli are considered the hardest case for classical factoring algorithms, they are in fact the easiest case for quantum factoring. This is a progress report on joint work with François Morain, Thomas Lawson, and Frédéric Grosshans.
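For readers who want the number-theoretic part made explicit, here is a purely classical toy sketch of the post-processing around the order-finding subroutine; the quantum part is replaced by naive brute force, so it only runs for illustrative toy moduli.

```python
# Hedged sketch: classical post-processing of Shor's algorithm on a toy modulus.
from math import gcd

def find_order(a, n):
    """Stand-in for the quantum subroutine: smallest r with a^r = 1 (mod n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_trial(a, n):
    """One trial: returns a nontrivial factor of n, or None (retry with new a)."""
    g = gcd(a, n)
    if g > 1:
        return g                   # lucky: a already shares a factor with n
    r = find_order(a, n)
    if r % 2 == 1:
        return None                # odd order: this a is unusable
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                # a^(r/2) = -1 (mod n): also unusable
    return gcd(y - 1, n)           # guaranteed nontrivial factor

n = 11 * 23                        # toy modulus (two distinct odd primes)
for a in range(2, n):
    f = shor_trial(a, n)
    if f:
        print(f"a = {a} yields factor {f} of {n}")
        break
```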
The Internet is an amazing success story, connecting hundreds of millions of users. However, in the last decade, there has been a growing realization that the current Internet Protocol is reaching the limits of its senescence. In fact, the way people access and utilize it has changed radically since the 1970s, when its architecture was conceived. This has prompted several research efforts that aim to design potential next-generation Internet architectures. In particular, Content-Centric Networking (CCN) is an emerging networking paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. CCN focuses on content distribution, which is arguably not well served by IP. Named-Data Networking (NDN) is an example of CCN. NDN is also an active research project under the NSF Future Internet Architectures (FIA) program. FIA emphasizes security and privacy from the outset and by design. To be a viable Internet architecture, NDN must be resilient against current and emerging threats. In this talk, we discuss the main security and privacy issues we identified in NDN.
This talk will consist of two parts. The first part is a report of an experiment that we recently conducted, while the second part focuses on a new potential research direction for which your input would be appreciated!
About the first part (based on the abstract of our submitted paper): Security and usability improvements in online banking are often made in academic proposals. Testing of these proposals could provide vital information for designing new systems and for proposing further improvements. A modular evaluation framework, presented as a virtual bank, could provide a common ground for testing and reduce the overhead of setting up experiments. We propose such a framework for testing secure usability in online banking, since to our knowledge none exists. To validate that it would provide useful information, we created a first proof of concept to measure secure usability and user behavior with two different authentication methods in an experiment. The results confirm that online banking users pay more attention to security actions after they notice that they have been attacked. We were also able to conclude that, of the two tested authentication methods, one is significantly faster in use than the other. These results validate that the envisioned framework will be able to provide useful information. What we learned from the proof of concept will be used in the development of a modular evaluation framework, which we will release in the near future as open source for others to experiment with.
About the second part: In customer-to-bank communications, banks often employ a transaction authorization scheme known as 'What You See Is What You Sign' (WYSIWYS). In this scheme, the user sends information over an insecure channel to the bank, after which the bank sends the received information back over a secure channel to the user for verification. Last year, we proposed an alternative named 'What You Enter Is What You Sign' (WYEIWYS, also presented during a DS Lunch). In this proposed scheme, the information provided by users is sent to the bank over a secure channel from the start, which makes it unnecessary for users to verify the correctness of the information afterwards.
In the second part of the talk I will briefly explain the principles of WYSIWYS and WYEIWYS (for those who missed that particular DS Lunch) before telling you more about an envisioned WYEIWYS implementation. For the latter I would like to request your insights. The idea is very fresh and a first proof of concept is still in development. Questions from my side will be related to the use of cryptography (is this secure in theory?) and whether you can suggest other improvements or alternative ideas that I simply did not think of.
Side-channel attacks on RSA aim at recovering the secret exponent by processing multiple power or electromagnetic traces. Exponent blinding is the main countermeasure, as it prevents the application of classical forms of side-channel attacks such as SPA, DPA, CPA and template attacks. Horizontal attacks overcome RSA countermeasures by attacking single traces. However, the processing of a single trace is limited by the amount of information it contains, and leakage assessment using labeled samples is not possible due to the exponent blinding countermeasure.
To overcome these drawbacks, we propose a side-channel attack framework based on a semi-parametric approach that combines the concepts of unsupervised learning, horizontal attacks, maximum likelihood estimation and template attacks in order to recover the exponent bits. Our method is divided into two main parts: a learning phase and an attacking phase. The learning phase consists of identifying the class parameters contained in the power traces representing the loop of the exponentiation. We propose a leakage assessment based on unsupervised learning to identify points of interest in a blinded exponentiation. The attacking phase executes a horizontal attack based on clustering algorithms to provide labeled information. Furthermore, it computes confidence probabilities for all exponent bits. These probabilities indicate how much our semi-parametric approach is able to learn about the class parameters from the side-channel information.
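As a rough illustration of the clustering step in the attacking phase (not the framework itself), the following sketch groups simulated per-iteration trace segments into two candidate bit classes with k-means; the trace model and all parameters are assumptions made for the example.

```python
# Hedged sketch: unsupervised labelling of single-trace exponentiation segments.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_bits, n_samples = 256, 50
secret = rng.integers(0, 2, n_bits)
# Simulated per-bit segments: the operation for a 1-bit leaks a slightly
# different mean at a few points of interest, buried in Gaussian noise.
traces = rng.normal(0.0, 1.0, (n_bits, n_samples))
traces[secret == 1, 10:13] += 0.8

labels = KMeans(n_clusters=2, n_init=10).fit(traces).labels_
# Cluster indices are arbitrary, so evaluate both polarities.
acc = max(np.mean(labels == secret), np.mean(1 - labels == secret))
print(f"single-trace bit recovery rate: {acc:.2%}")
```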
An attribute-based signature (ABS) is a special digital signature created using a dynamic set of issued attributes. For instance, a doctor can sign a medical statement with his name, medical license number and medical specialty. These attributes can be verified along with the signature by any verifier with the correct public keys of the respective attribute issuers. This functionality not only makes ABS a much more flexible alternative to the standard PKI-based signatures, but also offers the ability to create privacy-preserving signatures. However, none of the ABS constructions presented in the literature is practical or easily realizable. In fact, to the best of our knowledge, there is currently no ABS implementation used in practice anywhere. This is why we put forward a new ABS technique based on the IRMA attribute-based authentication. IRMA already has an efficient and practical smart-card implementation, and an experimental smart-phone implementation too. They are currently used in several pilot projects. In the paper that was recently presented at the SPACE conference, we propose an ABS scheme based on the existing IRMA technology, extending the currently available IRMA devices with ABS functionality. We study the practical issues that arise due to the introduction of the signature functionality to an existing attribute-based authentication scheme, and we propose possible cryptographic and infrastructural solutions. We also discuss use cases and implementation aspects.
An attribute-based credential scheme is a set of cryptographic protocols in which an issuer can grant a credential to a user, who can then show it to other parties. The credential contains several attributes; i.e., pieces of data, generally statements about the owner of the credential. In such a scheme, the user can selectively show some of the attributes contained in his credential, while hiding the others. The scheme should be unforgeable (that is, only the issuer can create new credentials), and ideally, it should not be possible to tell if two transactions in which the same attributes were disclosed did or did not originate from the same credential (this is called unlinkability).
A number of such attribute-based credential schemes exist. However, finding schemes that satisfy all of the above demands while still being sufficiently efficient for implementation on smart cards is something of a challenge. In this talk we present a new attribute-based credential scheme that is provably unforgeable, unlinkable, and suitable for smart cards, based on bilinear pairings on elliptic curves and an intractability assumption called the Known Exponent Assumption.
Mobile devices are increasingly used in security sensitive contexts such as physical access control and authorisation of payment transactions. In this work we contribute a mechanism to verify whether a mobile device currently resides within a geographical area at a given time, thus enabling the use of the location as an additional authentication factor.
Trustworthiness, privacy, and practicability are central to our mechanism. In particular, to provide trustworthy location information, our mechanism uses the location of the phone as detected by the Mobile Network Operator instead of relying on the location detected by the phone itself, which can be manipulated. We have followed a privacy-by-design approach to ensure that sensitive information, e.g., location and subscriber data, is only revealed to parties with a need to know. Privacy safeguards are realized using anonymous credentials, an established privacy-enhancing technology. Finally, our mechanism is practical and has few requirements on the mobile phone beyond the ability to run computations on anonymous credentials, as well as Internet and mobile network connectivity. These requirements are fulfilled by most smartphones on the market.
After a two-year delay because of legal issues Roel, Barıs and Flavio’s paper "Dismantling Megamos Crypto: Wirelessly Lockpicking a Vehicle Immobilizer" was presented at Usenix last month. This now also gives me the opportunity to present a different attack I developed on Megamos Crypto.
Despite a series of attacks, Mifare Classic is still the world's most widely deployed contactless smartcard on the market. The Classic uses a proprietary stream cipher, Crypto1, to provide confidentiality and authentication. However, once the cipher was reverse engineered, many serious vulnerabilities surfaced and a number of attacks were proposed. The most severe key recovery attacks only require wireless interaction with a card. These card-only attacks are considered the most serious threat, since they allow an attacker to avoid the camera detection that is often present at access control or public transport gates.
Consequently, system integrators started to deploy "fixed" Mifare Classic cards, wherein some issues are fixed while remaining backwards compatible. This averts all card-only attacks currently proposed in the literature. However, these countermeasures are rather palliative and inadequate for an insecure cipher such as Crypto1. In support of this proposition we present a novel card-only attack, independent of fixable implementation flaws. Hence, in order to avoid this attack, backwards compatibility must inherently be broken and all cards and readers must be upgraded.
The main challenge of any decentralized untrusted cryptocurrency network is reaching consensus amongst its participants on the validity of transactions. Besides reaching consensus, many other issues exist within decentralized cryptocurrencies, including power consumption, DDoS attacks and tattoos. Reaching consensus is achieved by 'proof-of' methods (dubbed consensus mechanisms), for example proof-of-work in Bitcoin or proof-of-stake in Peercoin. Many other consensus mechanisms exist as well, including a blockchainless consensus mechanism.
This talk will provide an overview of consensus mechanisms, their types, variants, differences and the issues they solve in decentralized cryptocurrencies.
All asymmetric cryptography that is widely deployed today can be broken in polynomial time by a large quantum computer, once such a computer exists. Post-quantum cryptography proposes alternatives that, as far as we know, will not be efficiently broken by a large quantum computer. Among the best understood post-quantum cryptographic constructions are hash-based signatures. Their security relies solely on certain standard properties of a cryptographic hash function, they are reasonably efficient, and both signatures and keys are short. The big problem is that they are stateful, i.e., the secret key needs to be updated after each signature. This poses various problems for practical deployment when considering backups, load balancing, or even typical APIs for signatures. In this talk I will present SPHINCS as the first efficient stateless hash-based signature scheme.
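For intuition about where the statefulness comes from, here is a minimal sketch of a Lamport one-time signature, the simplest hash-based building block: signing reveals one preimage per message bit, so reusing a key pair leaks secret-key material. Stateful schemes such as XMSS track which one-time keys are spent; SPHINCS instead selects them from an astronomically large implicit tree so no counter is needed.

```python
# Hedged sketch of a Lamport one-time signature (illustration, not SPHINCS).
import hashlib, os

H = lambda x: hashlib.sha256(x).digest()
N = 256  # message digest length in bits

def keygen():
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(N)]
    pk = [[H(sk[i][0]), H(sk[i][1])] for i in range(N)]
    return sk, pk

def bits(msg):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(N)]

def sign(sk, msg):
    # Reveal one preimage per digest bit -- hence strictly one-time.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"pay 10 EUR to Alice")
print(verify(pk, b"pay 10 EUR to Alice", sig))    # True
print(verify(pk, b"pay 10 EUR to Mallory", sig))  # False: different digest
```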
The Android platform uses a permission model that allows users and developers to regulate access to private information and system resources required by applications. These permissions have proven to be useful for inferring behaviours and characteristics of an application. Existing work has demonstrated several techniques to study ‘required’ permissions; however, little attention has been paid to ‘used’ permissions.
In this talk, I will present a pattern mining algorithm which can be applied to identify contrasting permission patterns that aim to detect the difference between clean and malicious applications. Our empirical results suggest that the permission patterns can capture key differences between clean and malicious applications, which can assist in characterising these two types of applications during malware triage.
Cryptographic implementations are vulnerable to side-channel analysis. To prevent this, masking schemes randomly split the sensitive values into d+1 shares, where d is the order of the scheme. However, using a d-th order masking scheme such as the Ishai, Sahai and Wagner scheme (ISW) to secure a non-linear operation requires an amount of random bits which is quadratic in the order d. In this work, we use two approaches to resolve this problem. First, we suggest a new higher-order masking scheme that can merge linear and non-linear operations in order to eliminate the quadratic randomness requirement. Second, we demonstrate how an imperfect, yet realistic adversarial model can lead to a relaxed notion of security that enables masking schemes with reduced randomness requirements.
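To see where the quadratic cost comes from, consider a sketch of the classic ISW AND gadget: securing one non-linear gate at order d consumes d(d+1)/2 fresh random bits, one per pair of shares. This is a toy functional model in Python, not a side-channel-hardened implementation.

```python
# Hedged sketch of boolean sharing and the ISW AND gadget.
import secrets

rand_bit = lambda: secrets.randbits(1)

def share(x, d):
    """Split bit x into d+1 boolean shares whose XOR equals x."""
    s = [rand_bit() for _ in range(d)]
    last = x
    for v in s:
        last ^= v
    return s + [last]

def unshare(s):
    x = 0
    for v in s:
        x ^= v
    return x

def isw_and(a, b):
    """Masked AND of two share vectors (ISW multiplication).
    The left-to-right evaluation order (r ^ a_i b_j) ^ a_j b_i matters
    for security in a real implementation."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = rand_bit()                        # fresh bit per share pair
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return c

d = 3
a, b = share(1, d), share(1, d)
assert unshare(isw_and(a, b)) == 1
print("randomness per AND at order", d, ":", d * (d + 1) // 2, "bits")
```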
One of the major challenges in the design and verification of manycore systems is cache coherency. In bus-based architectures, this is a well-studied problem. When replacing the bus by a communication network, however, new problems arise. Cross-layer deadlocks can occur even when the protocol and the network are both deadlock-free when considered in isolation. To counter this problem, we propose a methodology for deriving cross-layer invariants. These invariants relate the state of the protocols run by the cores to the state of the communication network.
We show how they can be used to prove the absence of cross-layer deadlocks. Our approach is generally applicable and shows good scalability.
Template attacks are a special kind of side-channel attack that works in two stages. In the first stage, the attacker builds up a database of template traces collected from a device similar to the target device. In the second stage, traces from the target device are compared to the template traces to recover the secret key. In the context of attacking elliptic-curve scalar multiplication with template attacks, one can interleave template generation and template matching and thereby reduce the number of template traces. In this presentation, we are going to show this technique by defining and applying the concept of online template attacks, a general attack technique with minimal assumptions for an attacker who has very limited control over the template device.
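As a minimal illustration of the matching stage (simulated data and an assumed leakage model, not the online variant itself): one builds a Gaussian template per class at a few points of interest and scores an attack trace by likelihood.

```python
# Hedged sketch of Gaussian template building and matching.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
poi, classes = 3, 2
# Profiling stage: many traces per class from the (similar) template device.
means = np.array([[0.0] * poi, [0.5] * poi])
profiling = [rng.normal(means[k], 1.0, (2000, poi)) for k in range(classes)]
templates = [
    multivariate_normal(p.mean(axis=0), np.cov(p, rowvar=False))
    for p in profiling
]
# Matching stage: pick the class whose template assigns the attack trace
# the highest likelihood.
attack = rng.normal(means[1], 1.0, poi)          # true class is 1
scores = [t.logpdf(attack) for t in templates]
print("best matching class:", int(np.argmax(scores)))
```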
When processes run in parallel, the total number of states in a system grows exponentially; this is typically referred to as the state space explosion problem.
The holy grail in model checking is finding a way in which we can effectively cope with this state space explosion. In this talk I discuss a line of research in which we use an equational fixed point logic and parity games as a basis for verification. I will focus on the latter, and discuss ideas that can be used to reduce these parity games. For those wondering whether this general theory has any applications at all I discuss industrial applications of this technology. If you want to know what these applications are, come listen to my talk!
Multi-core processors and Systems-on-Chip are composed of a large number of processing and memory elements interconnected by complex communication fabrics. These fabrics are large systems made of many queues and distributed control logic. Recent studies have demonstrated that high-level models of these networks are either tractable for verification or can provide key invariants to improve hardware model checkers. Formally verifying Register Transfer Level (RTL) designs of these networks is an important challenge, yet still open. This work bridges the gap between high-level models and RTL designs. We propose an algorithm that, from a Verilog description, automatically produces the corresponding micro-architectural model. The extracted model is transfer equivalent to the original RTL circuit.
I will present our ongoing work on relaxing the assumptions made for side-channel analysis. I will try to summarize our recent results without going into too many technical details. I will also say a few words about recent masking techniques and the shortcomings we have spotted in them so far.
Energy is becoming a key resource for IT systems. It can be essential for the success of a system under development to be able to analyse and optimise its resource consumption. Recently, a hardware-parametric approach has been proposed [joint work with Marko and Rody] that involves creating energy models for hardware components and passing the hardware models as a parameter to the analysis of the software that controls the system. This analysis uses an energy-aware Hoare logic. However, the Hoare logic approach suffers both from a lack of compositionality and from significant overshoot in the derived bounds. For large IT systems, compositionality is a key property for an analysis to be applicable in practice. This paper proposes a hardware-parametric dependent type system that offers great advantages over the earlier Hoare logic approach. Not only does it achieve compositionality through the use of function signatures, but it also has the potential to yield more precise bounds than the Hoare logic approach.
This presentation will detail ongoing work on the use of Residue Number Systems (RNS) in the computation of asymmetric encryption algorithms and on the secure usage of partially reconfigurable devices.
The ability of RNS to represent a large integer value as a set of smaller independent values allows for the parallel and carry-free computation of arithmetic operations. However, the conversion cost associated with RNS can significantly impact the performance of the resulting systems. The usage of RNS in the implementation of asymmetric encryption algorithms, the developed RNS coprocessor, and the obtained results will be detailed.
Additionally, the concept of partial dynamic reconfiguration of FPGA devices and a solution to improve the security of this reconfiguration process will also be discussed.
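The core RNS idea fits in a few lines: pick pairwise coprime moduli, do arithmetic independently per channel, and convert back with the Chinese Remainder Theorem; that conversion step is exactly where the cost mentioned above lives. A small sketch with toy moduli (the real coprocessor works with much larger, hardware-friendly channels):

```python
# Hedged sketch of RNS arithmetic with toy moduli.
from math import prod

moduli = [251, 253, 255, 256]          # pairwise coprime channels
M = prod(moduli)                       # dynamic range of the representation

def to_rns(x):
    return [x % m for m in moduli]

def from_rns(res):
    # CRT reconstruction; pow(..., -1, m) computes a modular inverse.
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

a, b = 123456, 7890
# Multiplication channel-by-channel, with no carries crossing channels:
c = [(x * y) % m for x, y, m in zip(to_rns(a), to_rns(b), moduli)]
assert from_rns(c) == (a * b) % M
print(from_rns(c))
```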
Input languages, which describe the set of valid inputs an application has to handle, play a central role in language-theoretic security, in recognition of the fact that overly complex, sloppily specified, or incorrectly implemented input languages are the root cause of many security vulnerabilities.
I'll present the ideas behind language-theoretic security to then discuss how our research using protocol state machines fits in with this.
There are two important algorithms for a quantum computer: Shor's and Grover's algorithm. Shor's algorithm is the most famous: it factors integers and solves discrete logarithms in cubic time, completely breaking the most common asymmetric ciphers like RSA, ECC and ElGamal. The field of post-quantum cryptography searches for asymmetric ciphers unaffected by Shor's algorithm.
Less spectacular, but unavoidable, is Grover's algorithm. It allows a quantum computer to find a preimage of an $n$-bit hash with just $2^{n/2}$ work. The countermeasure is easy but annoying: double the bits of security of every hash function and symmetric cipher. In this talk I will give an accessible explanation of Grover's algorithm. No prior knowledge required.
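For a feel of the $2^{n/2}$ bound, a toy state-vector simulation (numpy, illustrative parameters) shows that roughly $(\pi/4)\sqrt{N}$ oracle calls suffice where classical search needs about $N$:

```python
# Hedged sketch: simulating Grover search on a classical computer.
import numpy as np

n, target = 8, 42                      # search space of N = 2^n items
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))     # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[target] *= -1                # oracle: flip the marked amplitude
    state = 2 * state.mean() - state   # diffusion: inversion about the mean
print(f"{iterations} iterations (vs {N} classical tries),",
      f"success probability {state[target] ** 2:.3f}")
```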
The methods of the ISO/IEC 20248 Digital Signature Meta Structure standard are discussed. This standard provides a compact offline-verifiable data structure for RFID and barcodes. It is also applicable to IoT and M2M. The concept of offline-verifiable documents as a key function in providing trust services and accountability is demonstrated using title deed and vehicle license plate examples. The structure of the data in a barcode and RFID for optimum read performance is also demonstrated.
Albertus Pretorius represents South Africa at ISO/IEC JTC1 SC31 Automated Identification as expert and Head of Delegation. His focus in the standards work is on the UHF air protocol, data structures and security, with an emphasis on performance measurement methods. He has 20 years of experience in applied cryptography, specifically in the domain of public key technologies. He is the editor, and key driver, behind the SANS 1368 and now ISO/IEC 20248 Digital Signature Meta Structure standard. This standard provides a compact offline-verifiable data structure for RFID and barcodes. It is also applicable to IoT and M2M.
Asymmetric cryptographic primitives are essential to enable secure communications in public networks or public media. Such primitives can be deployed as software libraries or hardware coprocessors, the latter being more commonly employed in Systems-on-Chip (SoC) scenarios, embedded devices, or application-specific servers. Unfortunately, the most commonly available solutions, based on RSA or Elliptic Curve Cryptography (ECC), are highly processing-intensive due to the underlying extended-precision modular arithmetic. Consequently, they are not available on highly constrained platforms. Aiming to tackle this issue, we here investigate an alternative asymmetric encryption scheme that relies on lightweight arithmetic: McEliece. This scheme is especially appealing because, being based on error-correcting codes, it displays simpler arithmetic and leads to better performance when compared to RSA or ECC. To evaluate the implementation of this scheme in hardware, we propose and analyze a flexible architecture whose security level and time vs. area usage characteristics can be reconfigured as desired. Namely, the proposed architecture is suitable for all usual security levels, ranging from 80 to 256 bits. It is also very efficient, being able to perform data decryption with binary Goppa codes in 56 µs with 3402 slices on a Xilinx Spartan-3AN FPGA, while the best known result in the literature for the same FPGA is 115 µs with 7331 slices. Alternatively, the architecture can operate with Quasi-Dyadic Goppa (QD-Goppa) codes, which involve smaller keys than traditional binary Goppa codes. In the latter case, for an 80-bit security level, the decryption operation can take from 1.1 ms with 1129 slices to 68 µs with 8268 slices. By choosing a more hardware-friendly decoding algorithm, focusing hardware resources on the main bottleneck operations, and sharing hardware resources between two different algorithms, we obtained better results than those previously reported in the literature.
Private information retrieval (PIR) allows clients to retrieve records from online database servers without revealing to the servers any information about what records are being retrieved. To achieve this, the servers must typically do a computation involving the entire database for each query. Previous work by Ishai et al. has suggested using batch codes to allow a single client (or collaborating clients) to retrieve multiple records simultaneously while allowing the server computation to scale sublinearly with the number of records fetched.
In this work, we observe a useful mathematical relationship between batch codes and efficient matrix multiplication algorithms, and use this to design a PIR server algorithm that achieves sublinear scaling in the number of records fetched, even when they are requested by distinct, non-collaborating clients; indeed, the clients can be completely unaware that the servers are implementing our optimization. Our multi-client server algorithm is several times faster, when enough records are fetched, than existing optimized PIR servers.
As an application of our work, we show how retrieving proofs of inclusion of certificates in a Certificate Transparency log server can be made privacy friendly using multi-client PIR.
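A minimal sketch of the underlying observation, for a simple two-server information-theoretic PIR over GF(2) (not the authors' optimized construction): each client's query is a vector, so a batch of queries from independent, mutually unaware clients stacks into a single matrix product on each server, which is exactly where fast matrix multiplication and batch codes can pay off.

```python
# Hedged sketch: batched linear PIR queries as matrix multiplication.
import numpy as np

n_records, record_len = 8, 16
db = np.random.randint(0, 2, (n_records, record_len), dtype=np.uint8)

def query(index):
    """One 2-server IT-PIR query for record `index`: a random vector for
    server 1, and the same vector with bit `index` flipped for server 2."""
    q = np.random.randint(0, 2, n_records, dtype=np.uint8)
    q2 = q.copy()
    q2[index] ^= 1
    return q, q2

# Three independent clients; the servers just see a stack of vectors.
wanted = [2, 5, 7]
qs = [query(i) for i in wanted]
Q1 = np.array([q for q, _ in qs])
Q2 = np.array([q for _, q in qs])
# Each server answers its whole batch with one matrix product mod 2.
A1, A2 = (Q1 @ db) % 2, (Q2 @ db) % 2
records = A1 ^ A2                      # each client XORs its two answers
assert (records == db[wanted]).all()
print("recovered", len(wanted), "records with two matrix products")
```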
This talk presents experiments to study the practical limits of the RFID technology, more specifically ISO14443 proximity cards, used e.g. in biometric passports, public transport, and contact-less payments. Our goal was to investigate the maximum distance at which such RFID cards can be activated or read. This maximum distance has implications for security, as attackers might secretly activate cards or eavesdrop on the communication between a card and a legitimate reader.
During this talk several antenna setups are discussed, starting with a standard commercially available reader with a communication range of 5 cm and ending with the current situation, in which it is possible to communicate with an RFID tag at a distance of 50 cm.
Fault attacks have been widely studied in the past, but most of the literature describes only individual fault-injection techniques such as power/clock glitches, EM pulses, optical induction, or heating/cooling. In this work, we investigate combined fault attacks by performing clock-glitch attacks under the impact of heating. We performed practical experiments on an 8-bit AVR microcontroller, which resulted in the following findings. First, we identified that the success rate of glitch attacks performed at an ambient temperature of 100°C is higher than under room temperature. We were able to induce more faults and to significantly increase the time frame during which the device is susceptible to glitches, which makes fault attacks easier to perform in practice. Second, and independently of the ambient temperature, we demonstrate that glitches cause individual instructions to repeat, that we are able to insert new random instructions, and that opcodes get modified such that the address registers of individual instructions change. Besides these new results, this is the first work that reports results of combined glitch and thermal attacks.
Promotor Bart Jacobs
The paper is available at http://libeccio.di.unisa.it/PINO/PAPERS/fc04.pdf
Promotor Bart Jacobs
With over 10.6 million repositories and 1.5 million users, GitHub is currently the largest code hosting site in the world. Software engineering researchers have been drawn to GitHub due to this popularity, as well as its integrated social features and the metadata that can be accessed through its API. To make research with GitHub data approachable, we created the GHTorrent project, a scalable, off-line mirror of all data offered through the GitHub API. In our talk, we describe how we set up GHTorrent, how we built a community around it and what types of research it has been used for (including ours).
The paper is available at: https://eprint.iacr.org/2014/728 . It describes attacks on an authentication protocol PLAID -- a simpler cousin of IRMA. While the paper illustrates that security proofs are essential, I will also sketch why security proofs could have been deceptive in this instance.
There will be drinks afterwards in the Mercator 1 building.
One question that has received a lot of attention recently in quantum communication complexity is whether it is possible to perform quantum communication protocols in a private way. In this talk, we investigate the difference between two widely used concepts of privacy and we construct private quantum protocols for computing the Inner Product function and for Private Information Retrieval. See the paper on http://arxiv.org/abs/1409.8488 .
This talk will address the work that we (Fabian and I) have been doing on the proposal of a security architecture for the Smart Electrical Vehicle Charging project developed by Enexis DSO.
Over the past five years we have witnessed the introduction of DNSSEC, a security extension to the DNS that relies on digital signatures. DNSSEC strengthens DNS by preventing attacks such as cache poisoning. However, a common argument against the deployment of DNSSEC is its potential for abuse in Distributed Denial of Service (DDoS) attacks, in particular reflection and amplification attacks. DNS responses for a DNSSEC-signed domain are typically larger than those for an unsigned domain, thus, it may seem that DNSSEC could actually worsen the problem of DNS-based DDoS attacks. The potential for abuse in DNSSEC-signed domains has, however, never been assessed on a large scale.
We have established ground truth around this open question. We performed a detailed measurement on a large dataset of DNSSEC-signed domains, covering 70% (2.5 million) of all signed domains in operation today, and compared the potential for amplification attacks to a representative sample of domains without DNSSEC. At first glance, the outcome of these measurements confirms that DNSSEC indeed worsens the DDoS phenomenon. Closer examination, however, gives a more nuanced picture. DNSSEC really only makes the situation worse for one particular query type (ANY), for which responses may be over 50 times larger than the original query (and in rare cases up to 179×). We also discuss a number of mitigation strategies that can have immediate impact for operators and suggest future research directions with regard to these mitigation strategies.
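The quantity being measured is simply the ratio of response size to query size. A hedged sketch of the idea with dnspython (illustrative only, not the measurement pipeline used in the study; note that many resolvers nowadays minimise ANY answers or truncate them over UDP):

```python
# Hedged sketch: estimating the DNS amplification factor of an ANY query.
import dns.message, dns.query, dns.rdatatype

def amplification(domain, resolver="9.9.9.9"):
    # Treat any single number as indicative only: results vary by domain,
    # resolver, rate limiting, and ANY-minimisation (RFC 8482).
    q = dns.message.make_query(domain, dns.rdatatype.ANY, want_dnssec=True)
    r = dns.query.udp(q, resolver, timeout=5)
    return len(r.to_wire()) / len(q.to_wire())

print(f"amplification factor: {amplification('example.com'):.1f}x")
```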
Many embedded systems are responsible for the security and safety of critical infrastructures. They are integrated in large environments and need to support several communication protocols to interact with other devices or with users. Interestingly, such firmware often implements protocols that deviate from their original specifications: some are extended with additional features, while others are completely undocumented. Furthermore, such embedded parsers often consist of complex C code which is optimized to improve performance. However, this code is rarely designed with security in mind; e.g., it lacks proper input validation, which often makes those devices vulnerable to memory corruption attacks. Furthermore, most embedded designs are closed source, and third-party security evaluations are only possible from the binary firmware.
In our latest work we propose a methodology to identify and analyze parsers present in embedded firmware without access to its source code or documentation. We first locate and select parsing code statically; then we record memory interactions while the firmware is executed and extract information from the memory transactions. These steps are performed automatically and make it possible to rank the discovered functions according to their probability of containing a parser. We then re-execute the firmware using symbolic and concolic execution while replaying the previously recorded I/O memory transactions.
Finally, we demonstrate how our technique can be used to identify firmware components treating a peripheral’s input, perform automated reverse engineering to extract protocols, and discover and analyze bugs on four widely used devices: a Hard Disk Drive (HDD), a Programmable Logic Controller (PLC), a Power Meter and a GPS Receiver.
Symbolic execution is a program analysis technique that is used for many purposes, one of which is test-case generation. For loop-free programs, this generates a test set that achieves path coverage. Program loops, however, imply exponential growth of the number of paths in the best case and non-termination in the worst case. In practice, the number of loop unwindings needs to be bounded for analysis.
We consider symbolic execution in the context of the tool Symbolic PathFinder. This tool extends the model checker Java PathFinder and relies on its bounded state-space exploration for termination. We present an implementation of k-bounded loop unwinding, which increases the amount of user control over the symbolic execution of loops.
Bounded unwinding can be viewed as a naive way to prune paths through loops. When using symbolic execution for test-case generation, naively pruning paths likely comes at the cost of coverage. In order to improve coverage of branches within a loop body, we present a technique that semi-automatically concretizes variables used in a loop. The basic technique is limited, and we therefore present annotations to manually steer symbolic execution towards certain branches, as well as ideas on how the technique can be extended to be more widely applicable.
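A toy rendering of k-bounded unwinding for test-case generation, using Z3 directly rather than Symbolic PathFinder (the program and the bound are illustrative): each unrolled exit depth yields one path condition, and a model of that condition is a concrete test input for that path.

```python
# Hedged sketch: k-bounded loop unwinding over `i = 0; while (i < x) i++;`
from z3 import Int, Solver, And, sat

x = Int("x")   # symbolic program input
k = 3          # unwinding bound

for depth in range(k + 1):
    # Guards of the first `depth` iterations, then the failed loop test.
    pc = And([x > i for i in range(depth)] + [x <= depth])
    s = Solver()
    s.add(pc)
    if s.check() == sat:
        print(f"exit after {depth} iterations: test input x = {s.model()[x]}")
# Paths that would iterate more than k times are pruned by the bound,
# which is exactly the coverage loss the talk's concretization addresses.
```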
Big Data is the future. Thanks to cheap sensors and large digital memories, it is becoming possible to process enormous amounts of data. Experts hope to use this to tackle societal problems, ranging from obesity to climate change. Some even dream of partially replacing the political process by 'algorithmic self-regulation'. Will these self-regulation techniques actually be effective? And if they are effective, what are the ethical consequences? Participation: €7. More info at: http://www.ru.nl/soeterbeeckprogramma/agenda/overzicht_op_datum/vm/lezingen/2014/big-data-small/
For the occasion of the Ph.D. defence of Ken Madlener 'Formally Verified Modular Semantics', Peter Mosses is our guest for a few days. He will talk about the PLanCompS project. This is a 4-year joint research project based at Swansea, Royal Holloway and City, funded by EPSRC, started in September 2011. It aims to establish and test the practicality of a component-based framework for the design, specification and implementation of programming languages. The main novelty will be the creation of a substantial collection of highly reusable, validated language components called fundamental constructs or funcons. Crucially, the semantic specification of each funcon will be independent, not needing any reformulation when funcons are combined or new funcons added to the collection. All specifications will be provided online in an open access repository, with browsing and searching supported by a digital library interface.
Some references of work by Peter Mosses.
- Reusable Components of Semantic Specifications. Churchill M., Mosses P.D. and Torrini P., Modularity 2014.
- Deriving Pretty-Big-Step Semantics from Small-Step Semantics. Bach Poulsen, C. and Mosses P.D., ESOP 2014.
- FunKons: Component-Based Semantics in K. Mosses, P.D. and Vesely, F., WRLA 2014.
- Modular Bisimulation Theory for Computations and Values. Churchill, M. and Mosses P., FoSSaCS 2013.
Co-promotor Sjaak Smetsers, promotor Marko van Eekelen
Network-based attacks nowadays represent one of the biggest challenges for both security experts and operators. In the last couple of years, we have seen attacks on some of the basic infrastructure of the Internet (such as DNS), volumetric attacks (mostly Distributed Denial of Service attacks) that have threatened the stability of the Internet core, and the rise of the DDoS-as-a-Service phenomenon. The intensity, frequency and ease with which these attacks are being carried out clearly call for strong countermeasures. In the past, we mainly focused on intrusion detection. Nowadays, intrusion detection is only one step. First, a prerequisite for detection is monitoring. Second, there is a need for effective attack mitigation. This presentation will provide an overview of the current research conducted at the DACS group of the University of Twente, touching all of the above topics in network security.
In this talk we briefly explore the fundamentals of Bluetooth Low Energy and its main differences with "traditional" Bluetooth. We identify the security mechanisms that it provides and how they may be circumvented or are not available on mobile platforms. Because of that, we propose an application layer security mechanism in the form of a Bluetooth service intended to ensure authenticity and confidentiality of exchanged information. Time permitting, we show a prototype of how it can be used in practice with wireless sensors.
Current authentication methods for financial transactions in online banking often rely on the customer's computer to provide and maintain integrity. Since these computers cannot be trusted due to malware threats, banks are implementing a new scheme to provide secure verification of critical transaction details. The focus of this talk will be on the real-world problem that requires banks to reconsider the authentication methods they use, the new authentication scheme they are implementing, and the major security flaw in this new scheme. We propose an alternative scheme to work around this flaw, which is easier to use and offers at least the same security. A small primer on authentication in online banking is integrated into the talk for the uninitiated. Basic knowledge of authentication schemes is helpful but not a requirement to follow the talk and participate in discussions.
The language of propositional logic allows one to express security properties by means of propositional rules in a precise and rigorous manner. Once a property is formulated, its automated verification can be handled by SAT solvers. This talk introduces the software package “cryptosat” and discusses its use for the generation and analysis of SAT instances encoding cryptographic algorithms from the ARX family (Addition, Rotation, eXclusive or). In this context, we will discuss to what extent genetic algorithms can be used to detect backdoors in cryptographic SAT instances.
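As a taste of such encodings (illustrative only, not the cryptosat package itself): the X in ARX is the easy part, since z = x XOR y has a standard four-clause CNF encoding, rotations are mere relabelings of variables, and modular additions require carry chains built from the same kind of clauses. With the python-sat package:

```python
# Hedged sketch: CNF encoding of z <-> x XOR y, solved with python-sat.
from pysat.solvers import Glucose3

def xor_clauses(x, y, z):
    """Tseitin-style CNF for z <-> x XOR y, in DIMACS-style literals."""
    return [[-x, -y, -z], [x, y, -z], [x, -y, z], [-x, y, z]]

s = Glucose3()
x, y, z = 1, 2, 3
for cl in xor_clauses(x, y, z):
    s.add_clause(cl)
s.add_clause([z])          # assert the XOR output is 1
print(s.solve())           # True: satisfiable
print(s.get_model())       # e.g. [1, -2, 3], i.e. x=1, y=0, z=1
```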
Within TNO, work is done on privacy from a strategic and policy perspective. On the strategic side we help in shaping new concepts, such as privacy by design and privacy and personal data markets; on the policy side we deal with the role of privacy in innovation and in regulation. In the lunch meeting I will present the headlines of our approach and give an example for each of them.
Today we have Hans-Cees Speel, Security Officer of Buro Jeugdzorg Gelderland, presenting the challenges of implementing an information sharing system for 'Youthcare'. The system interconnects many different organisations preventing, detecting and/or treating children with all kinds of mental/social/health problems. Obviously, the dossiers exchanged contain highly sensitive data. Hans-Cees is looking for our constructive feedback ;-)
The EU has spent most of a year holding meetings and hearings to 'understand' the problem, but has not produced a single word on what concrete actions could regain the right to privacy for its citizens now. This despite the fact that a July 2001 report on Echelon, the NSA/GCHQ precursor program to the current alphabet soup, explained the scope of the problem of electronic dragnet surveillance and made practical and detailed recommendations that would have protected Europeans and their institutions had they been implemented. Currently only Germany has seen the beginnings of policies that will offer some protection for its citizens. On Friday the 13th of June I will discuss the full scope of the NSA surveillance problem, the available technological and policy solutions, and some suggestions about why they have not been and are not being implemented (or even discussed).
Energy-inefficient software implementations may cause battery drain for small systems and high energy costs for large systems. Dynamic energy analysis is often applied to mitigate these issues. However, this is often hardware-specific and requires repetitive measurements using special equipment. We present a static analysis deriving upper bounds for energy consumption based on an introduced energy-aware Hoare logic. Software is considered together with models of the hardware it controls. The Hoare logic is parametric with respect to the hardware. Energy models of hardware components can be specified separately from the logic. Parametrised with one or more of such component models, the analysis can statically produce a sound (over-approximated) upper bound for the energy usage of the hardware controlled by the software.
The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is mostly known for publishing protocol specifications, but it also produces informal publications and best practice documents. The talk will be about the organization of the IETF, as well as its processes and the lifecycle of publications. I will also share some experiences I've had during my participation over the years.
This talk will be on work done during my internship at Microsoft Research in Cambridge. I worked on automated analysis of structure-preserving signature (SPS) schemes. These schemes make use of bilinear pairings and can be very useful in the construction of new cryptographic operations like blind signatures. We can distinguish two different methods to analyse these schemes. The first method is algebraic analysis, which relies on q-type assumptions. We developed a tool that can perform this analysis for a fixed number of oracle queries. The tool makes use of SMT solvers for the actual computations. The second method to prove SPS schemes secure is by reducing them to a known hard problem. Though this method provides stronger security guarantees, it is much harder to automate, and this is still ongoing work.
The Cronto app is an iPhone and Android app for online banking. With this app the user's phone becomes an extra authentication factor. The bank customer uses the online banking site on his PC to enter the transaction details. The banking site displays a colour QR code that the user scans with the Cronto app on his phone. The app displays the transaction details from the QR code on the phone, the user checks the transaction and confirms it. The app then computes a response code that the user enters on the banking site on his PC. I will discuss the inner workings of the Cronto app that I learned by reverse-engineering it.
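As a generic sketch of this class of schemes (explicitly not the reverse-engineered Cronto protocol; the key handling and response format below are invented for illustration): the QR code carries the transaction details, and the phone derives a short response code from them with a key shared during enrolment, so confirming the code also confirms the displayed transaction.

```python
# Hedged sketch: generic QR-based transaction confirmation (hypothetical).
import hmac, hashlib

shared_key = b"provisioned-during-enrolment"   # assumption for illustration

def app_response(qr_payload: bytes) -> str:
    print("Please verify:", qr_payload.decode())   # user checks the details
    mac = hmac.new(shared_key, qr_payload, hashlib.sha256).digest()
    # Truncate the MAC to an 8-digit code the user can type back.
    return str(int.from_bytes(mac[:4], "big") % 10 ** 8).zfill(8)

print("response code:", app_response(b"pay EUR 100.00 to NL00BANK0123"))
```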
The modern automotive and aircraft industries face the problem of designing and verifying reliable, secure and trustworthy on-board computers. As these systems must be able to control safety-critical systems such as the brakes of a car and the airbags, their certification occurs according to the highest levels set by European governments. EUROMILS is a project funded by a European consortium with the aim of a highly certified on-board chip architecture for future cars and planes. We present our part of this effort, namely the formal verification of such a chip architecture. This concerns security-related properties such as non-interference between domains that must not interfere with each other. For the verification we use the Isabelle theorem prover.
While green software is a popular topic in computer science nowadays, the average programmer still has few options for analysing the energy efficiency of his or her software. Analysis is mostly done dynamically, for which a complex measurement set-up is needed. A static analysis which predicts the energy consumption would be more accessible and more cost-effective. We present ECAlogic, a tool that implements a static analysis that can bound the energy consumption of algorithms. The tool is parametric with respect to a set of hardware component models. Its results are symbolic over the program parameters.
In the realm of multi-core processors and systems-on-chip, communication fabrics constitute a key element. A large number of queues and distributed control are two important aspects of this class of designs. These aspects make decomposition and abstraction techniques difficult to apply. For this class of designs, the application of formal methods is a real challenge. In particular, the verification of liveness properties is often intractable. Communication fabrics can be seen as a set of queues and flops interconnected by combinatorial logic. Based on this simple but powerful observation, we propose a novel method for liveness verification. Our method directly applies to Register Transfer Level designs. The essential aspects of our approach are (1) to abstract away from the details of queue implementations and (2) an efficient encoding of liveness properties in an SMT instance. Experimental results are promising. Designs with hundreds of queues can be analysed for liveness within minutes.
Side-channel analysis was proposed in 1999 by Kocher et al., but the power of doing the analysis in the frequency domain was not utilized until 2005. Numerous subsequent publications concentrated mostly on experimental results, with only a few attempting to support observations by theoretical analysis. In this paper, we start from a theoretical model of the signal and show that the algorithmic noise can amplify the leakage in the presence of Gaussian noise. On the other hand, under certain conditions, the algorithmic noise can remove the leakage, acting as an SCA countermeasure. We conclude that the leaking frequencies depend on the underlying algorithmic noise and the phase between the noise and the leakage, which has been overlooked by previous work so far. The observations made in the paper are also supported by experiments on a real target, and the results show that the difference can be as much as 2.5 times in terms of the number of traces required to recover the key.
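A toy sketch of why the frequency domain can help (simulated traces and an assumed leakage model, not the paper's signal model): misalignment only shifts the phase of the leakage, so correlating a leakage hypothesis against magnitude spectra still finds the leaking frequency.

```python
# Hedged sketch: correlation in the frequency domain tolerates jitter.
import numpy as np

rng = np.random.default_rng(7)
n_traces, n_samples = 500, 128
hyp = rng.integers(0, 9, n_traces).astype(float)    # e.g. Hamming weights
# Simulated traces: leakage on a carrier at bin 10, plus noise and jitter.
t = np.arange(n_samples)
jitter = rng.integers(0, 4, n_traces)
traces = np.array([
    h * np.sin(2 * np.pi * 10 * (t + j) / n_samples)
    for h, j in zip(hyp, jitter)
]) + rng.normal(0, 2.0, (n_traces, n_samples))

spectra = np.abs(np.fft.rfft(traces, axis=1))
# Pearson correlation of the hypothesis with every frequency bin.
corr = [abs(np.corrcoef(hyp, spectra[:, f])[0, 1])
        for f in range(spectra.shape[1])]
print("peak correlation at frequency bin", int(np.argmax(corr)))
```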
In this talk I will provide a brief overview of what kept me busy in the past 4.5 years. From OV-chipkaart to IRMAcard, and everything in between.
Hot on the heels of the LEGO movie, now the first penetration tester made out of LEGO! The LEGO pen-tester was built by two Austrian students, Georg Chalupar and Stefan Peherstorfer, who visited last semester, with help from Joeri de Ruiter. It can automatically reverse-engineer the implementation of an internet banking token like ABN-AMRO's e.dentifier2, using LearnLib for the inference of a finite state machine. Within a couple of hours our LEGO hacker can learn a state space in enough detail to see the difference between the old and the new version of the e.dentifier2 - revealing the security flaw in the old version and confirming that this flaw has been removed in the new version. This shows that the LEGO hacker is already cleverer than the security evaluators employed by ABN-AMRO (but that is not saying much, unfortunately...)
Should we be worried about malware on smartphones and tablets? Can private data be stolen, phone calls be recorded and location data be used to track us? How serious are such threats? How secure are these devices really? By giving an in-depth overview of the security architecture of mobile devices we will try to address these questions, indicate the risks and highlight recent trends, such as ARM TrustZone and Mobile TPM.
The AVR microcontroller is often integrated in small and secure devices. In this talk, I will give a short introduction to mechanized operational semantics and how one can use the ACL2 theorem prover to verify AVR assembly code. I do not assume any prerequisites and will do my best to make the talk accessible to everyone.
My long-term goals are (1) the verification of AVR cryptographic primitives and libraries and (2) exploring the possibility of deriving symbolic power traces. These are very early research ideas and I would be happy to discuss them with you and hear your feedback.
The Radboud University Nijmegen is planning to outsource its student mail facility. Since providing student mail is not a core business of the university, the board has decided to stop facilitating it ourselves, starting from the next academic year (summer 2014). Because the mail addresses the RU provides (@student.ru.nl) are the only addresses we use to communicate with our students, and because in a number of cases students like to have an address that shows they are a student at our university, we plan to migrate this service to "the cloud". We would like to discuss the possibilities and threats of the migration of @student.ru.nl to the cloud, to be able to take your insights into account, as the project is just starting.
I will discuss how ideas from universal algebra can be used to axiomatize equations between programs. I'll focus on my completeness result for a local store monad. I recently joined the DS group (I'm working with Bart Jacobs) and this will be a fairly high level and accessible talk to introduce myself.
This talk will give a short overview of the current legal framework for electronic signatures and discuss the new recommendations from the ECB for strong authentication. I will give an example of a setup usable for creating server-based signatures: personal signatures created in a cloud. We can then discuss whether these server-based signatures fit within the presented legal frameworks and recommendations.
The C-DAX project aims at providing a secure overlay network, as an overlay over an IP network, that provides an information-centric network (ICN) tailored to the needs and the capabilities of smart grids. In this talk we will address how end-to-end security can be enforced in information-centric networks by proposing a protocol based on the concept of identity-based encryption, a type of public-key cryptography.
Classic two-factor authentication has been around for a long time and has enjoyed success in certain markets (such as the corporate and the banking environment). A reason for this success is the strong security properties, particularly where user interaction is concerned. These properties hinge on the security token being a physically separate device. This paper investigates whether Trusted Execution Environments (TEE) can be used to achieve a comparable level of security without the need for a separate device. To do this, we introduce a model that shows the security properties of user interaction in two-factor authentication. The model is used to examine two TEE technologies, Intel's IPT and ARM TrustZone, revealing that, although it is possible to get close to classic two-factor authentication in terms of user interaction security, both technologies have distinct drawbacks. The model also clearly shows an open problem shared by many TEEs: how to prove to the user that they are dealing with a trusted application when trusted and untrusted applications share the same display.
Censorship-circumvention tools are in an arms race against censors. The censors study all traffic passing into and out of their controlled sphere, and try to disable censorship-circumvention tools without completely shutting down the Internet. Tools aim to shape their traffic patterns to match unblocked programs, so that simple traffic profiling cannot identify the tools within a reasonable number of traces; the censors respond by deploying firewalls with increasingly sophisticated deep-packet inspection. Cryptography hides patterns in user data but does not evade censorship if the censor can recognize patterns in the cryptography itself. In particular, elliptic-curve cryptography often transmits points on known elliptic curves, and those points are easily distinguishable from uniform random strings of bits.
We introduce high-security high-speed elliptic-curve systems in which elliptic-curve points are encoded so as to be indistinguishable from uniform random strings.
Communication fabrics constitute a key component of multicore processors and systems-on-chip. To ensure correctness of communication fabrics, formal methods such as model checking are essential. Due to the large number of buffers and the distributed character of control, applying these methods is challenging. Recent advancements in the verification of communication fabrics have demonstrated that the use of inductive invariants provides promising results towards scalable verification of Register Transfer Level (RTL) designs. In particular, these invariants are key in the verification of progress properties. Such invariants are difficult to infer. So far, they were either manually or automatically derived from a high-level description of the design. Important limitations of these approaches are the need for the high-level model and the necessary match between the model and the RTL design. We propose an algorithm to automatically derive these invariants directly from the RTL design. We consider communication fabrics to be a set of message storing elements (e.g. buffers) and some routing logic in-between. The only input required by our method is a definition of when messages enter or leave a buffer. We then exploit this well-defined buffer interface to automatically derive invariants about the number of packets stored in buffers. For several previously published examples, we automatically generate the exact same invariants that were either manually or automatically derived from a high-level model. Experimental results show that the time needed to generate invariants is a few seconds even for large examples.
Many cyber-physical applications are responsible for safety-critical or business-critical infrastructure. Such applications are often controlled through a web interface. They manage sensitive databases, drive important SCADA systems or represent imperative business processes. A vast majority of such web applications are well known to be vulnerable to a number of exploits. The focus of this paper is on the vulnerability of session stealing, also called session hijacking. We developed a novel method to prevent session stealing in general. The key idea of the method is binding the securely negotiated communication channel to the application user authentication. For this we introduce a server-side reverse proxy which runs independently from the client and server software. The proposed method wraps around the deployed infrastructure and requires no alterations to existing software. This presentation discusses the technical encryption issues involved with employing this method. We describe a prototype implementation and motivate the technical choices made. Furthermore, the prototype is validated by applying it to secure the particularly vulnerable Blackboard Learn system, which is an important and critical infrastructural application for our university. We concretely demonstrate how to protect this system against session stealing.
The best way to keep a secret is being able to deny you are keeping any. *pol* is a command-line password manager with deniable encryption. A safe can have multiple containers filled with secrets. Each container has its own password. With only one password, an adversary cannot prove that there is another container. This is true even if the adversary has multiple versions of the safe with changes in the other container. In this talk, I will explain the cryptography behind pol.
I will focus on those parts where I, as an amateur cryptographer, am not sure whether they are safe or optimal. More information can be found at: http://github.com/bwesterb/pol
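To make the deniability idea concrete, here is a toy model (explicitly not pol's actual on-disk format; the slot layout, KDF parameters and cipher choice are assumptions for illustration): the safe is one blob of uniformly random bytes, and each container lives at a password-derived slot under a stream cipher, so used and unused regions are indistinguishable without a password.

```python
# Hedged toy of a deniable multi-container safe (not pol's real design;
# real designs must also handle slot collisions, container sizes, and
# re-randomisation across safe versions).
import hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOB_SIZE, SLOT_SIZE = 4096, 64
salt = os.urandom(16)
safe = bytearray(os.urandom(BLOB_SIZE))        # all-random background

def slot_and_cipher(password: bytes):
    k = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=52)
    key, nonce, idx = k[:32], k[32:48], int.from_bytes(k[48:], "big")
    slot = (idx % (BLOB_SIZE // SLOT_SIZE)) * SLOT_SIZE
    return slot, Cipher(algorithms.AES(key), modes.CTR(nonce))

def put(password: bytes, secret: bytes):
    slot, cipher = slot_and_cipher(password)
    safe[slot:slot + len(secret)] = cipher.encryptor().update(secret)

def get(password: bytes, length: int) -> bytes:
    slot, cipher = slot_and_cipher(password)
    return cipher.decryptor().update(bytes(safe[slot:slot + length]))

put(b"password one", b"innocuous secrets")
put(b"password two", b"the real secrets!")
print(get(b"password two", 17))
```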