Paper Discussion 10b: Risks of Trusting the Physics of Sensors #80

Open
searri opened this issue Mar 9, 2020 · 12 comments
Labels
paper discussion s20 The discussion of a paper for the spring 2020 class.

Comments

@searri (Contributor)

searri commented Mar 9, 2020

  • @ratnadeepb, Comprehension: Backdoor coupling vulnerabilities occur often in electrical systems
  • @hjaensch7, Critical: Who is responsible for making sure hardware is secure?
  • @chandaweia, Critical: Is it reasonable to expect students to get the interdisciplinary education the paper calls for?
  • @ericwendt, Comprehension: What does it mean to shift away from "component-centric security"?
  • @rebeccc, Critical: Paper underplays the complexity of manufacturers needing to specify how to secure the physics of sensors
  • @pcodes, Critical: What can be done at the manufacturing stage to improve security?
  • @rachellkm, Comprehension: How common are transduction attacks?
  • @gkahl, Comprehension: Would having multiple (redundant) sensors help secure against transduction attacks?

Shared concerns:

  • What's the performance impact of making sensor output continuously checkable?
  • The paper seems too shallow and doesn't provide enough detail or solutions for the issues it brings up
  • Education section seems to dodge responsibility for security and place it on the up-and-coming generation
@gparmer added the "paper discussion s20" label Mar 22, 2020
@ratnadeepb (Contributor)

ratnadeepb commented Mar 22, 2020

Reviewer: Ratnadeep Bhattacharya
Review Type: Comprehension

Main Problem Solved
The paper looks more like a survey paper, maybe even a newspaper article. But the premise of the paper is very interesting; it definitely rekindled a long-lost love for physics.

Anyhow, the paper talks about transduction attacks, which exploit a vulnerability in the physics of a sensor to introduce intentional errors.

Main Contributions
The paper talks about the different surfaces for transduction attacks and possible remedies that include educational solutions. The last part of the paper did not interest me much but I did think I could provide a few examples to make the paper a little more lucid for the not-so-physics-minded.

Threats: Thieves can break into cars using Man-in-the-Middle (MITM) attacks against keyless entry systems. However, transduction attacks use unintended functions of the circuitry to threaten the integrity of a system. We will come back to MITM attacks that can be carried out via transduction.

Two types of analog threats:
1. Opportunistic attacks requiring no special equipment
2. Advanced attacks that require specialised equipment and advanced knowledge of physics

DolphinAttack: MEMS (microelectromechanical systems) microphones can pick up ultrasound despite efforts to attenuate it, which can make voice-recognition systems execute phantom commands.

Malicious Backdoor Coupling: This is a signal that enters the system indirectly by coupling to its wires or other instruments. As an undergrad, I spent a lot of time studying high-voltage power transmission. There are backdoor couplings all over electrical systems. For example, high-voltage power lines generate very strong magnetic fields that directly couple with telephone lines, distorting telephone conversations. Another example of such indirect coupling can be found in optical network cables. Only a slight bend in the cable is enough to leak light from the glass into the insulating sheath, distorting the signal. Furthermore, light is essentially an electromagnetic signal; the magnetic field can extend outside the wires and can couple with sufficiently sensitive equipment in order to execute MITM attacks.

A possible example of resonance attacks
Optical networking cables are made of glass (silica) which contains hydroxide ions (OH-). These ions have a resonance at around 680nm (this number is from memory; I couldn't find a source, so it might be somewhat erroneous). So if one can introduce a signal at 680nm into an optical fiber, then he or she can induce the cable to resonate, at which point the cable will introduce massive distortions to all signals passing through. (This problem is not as devastating as it used to be, and the example might be a little contrived.)

Software Security Tools are not geared towards preventing transduction attacks.
• The paper advises a shift from component-centric security to system-centric security.
○ This point harks back to an earlier paper that we read that talked about the Tock operating system.
• Making the output of sensors checkable by software (a rough sketch of such a check follows after this list)
○ For example, it should be easy to check if we are receiving signals on an optical cable at 680nm.
• Manufacturing circuits in a manner to reduce effects of resonance
○ The paper talks about drilling into a sensor board such that the vibrations are above the resonant frequency of the board.
○ Generally, optical signals are not sent at around 680nm (somewhat dated information)
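
To make the "checkable output" point above concrete, here is a minimal sketch of what such a software check could look like (my own illustration, not something the paper specifies): flag a window of sensor samples whose spectrum contains significant energy in a band the legitimate signal should never occupy, e.g. near a known resonant frequency. The function names, the band, and the 10% threshold are all assumptions.

```python
import numpy as np

def out_of_band_energy_ratio(samples, sample_rate_hz, band_hz):
    """Fraction of spectral energy that falls inside a suspicious band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

def looks_tampered(samples, sample_rate_hz, resonant_band_hz, threshold=0.10):
    """Flag a sample window if too much energy sits near the resonant band."""
    return out_of_band_energy_ratio(samples, sample_rate_hz, resonant_band_hz) > threshold
```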

Questions
The paper, while tickling my interest, was perhaps not deep enough to raise immediate questions.

@hjaensch7 (Contributor)

hjaensch7 commented Mar 23, 2020

Reviewer: Henry Jaensch

Review Type: Critical Review

Problem Being Solved

Cyber-physical systems rely on sensor readings to do useful work; sensors are an important part of the feedback loop between sensing and actuation. Standard security mechanisms exist to protect against software-based attacks, but this model assumes that sensors can be trusted completely. That assumption creates an avenue of attack referred to as a transduction attack. These attacks take advantage of the physical materials used to make sensors. For example, emitting a wave that oscillates at the resonant frequency of an accelerometer can manipulate its output.
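
To illustrate the intuition, here is a toy numerical sketch of my own (with made-up numbers, not the actual attack setup from the paper): once an acoustic tone near the sensing element's resonance reaches the ADC, sampling aliases it down to a low frequency that software cannot distinguish from genuine motion.

```python
import numpy as np

sample_rate_hz = 100.0   # assumed accelerometer output data rate
inject_hz = 2503.0       # assumed acoustic tone near the sensing mass's resonance

# One second of samples, as the ADC would record them.
t = np.arange(0, 1.0, 1.0 / sample_rate_hz)
recorded = np.sin(2 * np.pi * inject_hz * t)

# The recorded samples are identical to a genuine low-frequency vibration:
alias_hz = abs(inject_hz - round(inject_hz / sample_rate_hz) * sample_rate_hz)
genuine = np.sin(2 * np.pi * alias_hz * t)
print(alias_hz)                           # 3.0
print(np.allclose(recorded, genuine))     # True
```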

Main Contributions

This paper provides an evaluation of how some of these attacks work and some ways that these attacks can be prevented. Examples of attacks include submitting voice commands to voice assistants using ultrasonic waves, or even spoofing sensors that do object detection on a Tesla. The implications of these attacks are widespread and potentially deadly. This paper identifies that while there are some preventative measures that can be implemented in software - checking sensor results - a lot can be done on the manufacturing side to prevent sensor attacks. The paper also acknowledges that education and critical team composition choices can help to prevent attacks like these. Even having an engineer recognize that components being used to make a system pose potential risks could avoid some of these issues.

Questions

  1. Where does the onus lie in terms of resonant frequencies? If a manufacturer can modify the chip to avoid coupling, should they? Or does a request for "physical trenches" around chips suffice?

  2. A software engineer can produce useful work surrounded by a world of abstractions that act like black boxes. Abstractions are what allow a software engineer to do useful things. There is a balance between understanding everything down to the voltage moving across the chip and never declaring a variable's type. A question that might be worth asking is where embedded-systems education fits into the curriculum: EE, CS, CE, maybe even Systems Engineering?

Critiques

  • More details regarding the examples would've been useful for understanding the vulnerabilities. The Tesla example had nice graphics but lacked a thorough explanation of the attack. This was true for many of the examples.

  • The notion that autonomous systems should remain trustworthy despite untrustworthy components is interesting but also immediately thrown away when discussing manufacturers changing chip design to avoid issues like low resonant frequency.

@chandaweia (Contributor)

Reviewer: Cuidi Wei
Review Type: Critique

Problem being solved
This paper mainly tries to address the threats to the underlying physics of sensor technology caused by transduction attacks.

Main contributions
This paper mainly talks about transduction attacks by introducing the threats and vulnerabilities of sensors, and proposes approaches to cope with threats to the underlying physics of sensor technology. It suggests that development requires interdisciplinary teams and that system-security engineers need interdisciplinary knowledge.

Questions and Critiques

  1. How can circuits be manufactured in a manner that reduces the effects of resonance? Why would this make attacks more difficult?
  2. I think it's very arduous for a student to absorb interdisciplinary knowledge on top of the already overwhelming material on embedded systems and physical machines, so is it possible for students to learn the interdisciplinary material well in just a few years?
  3. For many sensors, timing is strict. Does it cost too much to make the security of sensor output continuously checkable?

@ericwendt (Contributor)

Reviewer: Eric Wendt
Review Type: Comprehension

Summary
This paper brings up a few security issues involving physical attacks, mostly dealing with sensors. The idea they express is that transduction attacks are widely unpredictable and affect a large number of devices. They also talk about future student mentalities and the computer science curriculum.

Main Contributions
This paper really doesn't talk about much, just a few physical attacks on sensors and the difficulties introduced on the attacker's end. One mention was an attack that was so simple it didn't require any special equipment. The other attack would require equipment of some sort, like an LRAD. Some strategies for giving students a more well-rounded education were discussed.

Questions

  • They say to shift away from a component-centric security model in favor of a system-centric one. What does this mean? I don't understand their explanation. Isn't the goal of security to mitigate every possible point of failure? Seems unnecessarily verbose.
  • How can you detect a MEMS attack in software?
  • What kind of costs are incurred on adding physical protection units/software to low-powered IoT devices?

@rebeccc (Contributor)

rebeccc commented Mar 23, 2020

Reviewer: Becky Shanley
Review Type: Critical Review

Problem Being Solved
This paper introduces a new dimension of security in the IoT -- the physics of sensors and how their exploitation can go undetected by the device, which can have a dangerous impact on the physical world. It explores the history of these sensor vulnerabilities and how attacks are being conducted on them.

Main Contributions
This paper contributes to solving the problem of sensor exploitation by studying the attributes a system would need in order to remain trustworthy despite the "untrustworthy components". It also identifies holes pertaining to embedded system security in the current Computer Science education system that could be contributing to the lack of attention in these areas.

Questions

  1. Why is component-based security so dangerous to their problem? Wouldn't the isolation of components keep the rest of the system safe from malicious attacks on one component?
  2. I believe that these are threats, but I don't understand the physical consequences of many of them. What would the attacks detailed in the introduction do to users other than annoying them? They mention attackers being able to deliver intense sound waves and controlling the accelerometer, but what does that do to the device or to the physical world?

Critiques

  1. This read much more like an article or a shallow survey than any of the other papers we've read so far. I feel like it failed to make any substantial moves in this problem domain because it lacked any experimentation or methodology.
  2. Going off of that critique, I felt as though the sections making suggestions to systems made huge assumptions. Specifically, the "Specify physical security" section assumes the physical fixes proposed by companies are "simple". I don't know who the target audience is for these physical fixes, but for me in particular, just reading about the fixes was very confusing. It seems like a disaster for a manufacturer to have to coordinate with their users repairing their own devices.
  3. The ending of the paper is really weird to me. Although I agree that embedded security should be more apparent in the undergraduate CS curriculum, I don't feel like it contributes to solving the problem they identified productively. It seems very much like, "Well, we couldn't do it, so people need to be taught the basics in college so they can do it for us". I don't know, it was just a weird thing to spend 1/4 of the paper on.

@tuhinadasgupta (Contributor)

Reviewer: Tuhina Dasgupta
Review Type: Critical Review

Problem:
This paper discusses a new vulnerability in IoT devices: sensors. It discusses different types of attacks, some more "physical", like spoofed data, and some more "electric", like man-in-the-middle attacks.

Contributions:
The paper covers the history of attacks and provides in-depth coverage of some types of attacks, like transduction attacks. They recommend using classic engineering approaches to solve attacks that target the physics of sensors. They also more broadly suggest that IoT development should be more interdisciplinary going forward, education of embedded systems should be widely available, and that students should be knowledgeable about the foundations of embedded systems that are abstracted upon.

Questions:

  1. How pervasive is the issue of malicious sensor attacks both "physical" and "electric"? Because this is the first time I've considered or heard of this being a security risk...
  2. How are current sensor security issues being addressed? With the example of autonomous vehicles being in the paper, I'm very curious about how high-risk device sensors are being kept secure.

Critiques:

  1. The end of the paper seems kind of disjoint and sort of like a baton pass to the next generation rather than a conclusion and a talk about the expansion of this research.
  2. I still don't really understand why a return to the basics is so necessary to combat this security risk. I understand why understanding the fundamentals and not just taking them for granted is necessary (thanks to OS, Computer Architecture) but not totally sold on this paper's "back to the basics" approach
  3. Not sure if this was meant to be more of a survey paper but I was kind of surprised how much it felt like one.

@pcodes (Contributor)

pcodes commented Mar 23, 2020

Reviewer: Pat Cody
Review Type: Critical

Problem Being Solved

Many IoT systems suffer from security flaws inherent in the physical properties of their sensors, rather than from insecure code. This opens these systems up to transduction attacks, whereby a vulnerability in the physics of a sensor is exploited.

Main Contributions

This paper highlights the danger of blindly trusting components in an embedded system, as physical sensor properties can be exploited to create bad readings, even when the code is written correctly. It proposes several methods to mitigate these security vulnerabilities, namely by reducing the trust placed in each sensor component. It also discusses improving the education of engineers to be more well-rounded, and not just specific to writing software or making hardware, but a mix of both.

Questions

  • How can we actually avoid having component-centric security? They discuss having the results be continuously checkable, but they also mention the phrase "system-centric tolerance of untrustworthy components". Is the former an example of the latter, or are there other things we can do to improve this system-centric tolerance?
  • How effective is pushing the proposed security improvements down to the customer, and are there things that can be done at the manufacturing level to improve security? I realize some of the flaws are a result of how customers are installing the circuit boards, but I am wary of putting the responsibility to fix the circuit board flaws on the consumer.

Critiques

  • The education section of the paper seems a little out of place. While the authors bring up some interesting points, it doesn't really connect to the technical analysis they did in previous sections, and I wish they had spent more time discussing their proposed solutions to improving embedded security.
  • I would have liked to see more examples in their proposed solutions section of what they were hoping to see.

@rachellkm (Contributor)

Reviewer: Rachell Kim
Review Type: Comprehension

Problem Being Solved:

This paper discusses the threats against physical sensors and the pitfalls of trusting the physics of sensors within embedded devices. More specifically, the paper discusses transduction attacks performed against sensors that allow adversaries to manipulate output data and mechanisms to prevent such attacks.

Main Contribution:

The authors of this paper survey various vulnerabilities of physical sensors within embedded systems and suggest a few security measures to improve resilience against transduction attacks. Namely, they suggest techniques from embedded security as well as consumer side modifications to better improve overall system reliability.

Questions:

  1. The way the problem was presented in this paper made it seem like direct attacks against sensors to manipulate output was a relatively “new” issue or at least an area that needs a lot more research. How common or uncommon are these types of attacks relative to software attacks on systems?
  2. The authors mention that the key to trustworthy sensors is to allow systems to repeatedly check the integrity of sensor outputs. What are some known techniques to do this other than exposing spectral analytics from the sensors (if they have them)?
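
One simple family of techniques that comes to mind (a hedged sketch of my own, not something the paper prescribes) is physical-plausibility checking: reject or flag readings that fall outside the range or maximum rate of change the measured process could plausibly produce. The limits below are illustrative assumptions for a temperature sensor.

```python
def plausible(prev_reading, new_reading, dt_s,
              lo=-40.0, hi=125.0, max_slew_per_s=5.0):
    """Accept a reading only if it is in range and did not change faster
    than the physical process plausibly allows (units: degrees C, seconds)."""
    if not (lo <= new_reading <= hi):
        return False
    if prev_reading is not None and abs(new_reading - prev_reading) / dt_s > max_slew_per_s:
        return False
    return True

# Example: a jump from 22 C to 80 C within 100 ms is flagged as implausible.
print(plausible(22.0, 80.0, dt_s=0.1))  # False
```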

@gkahl (Contributor)

gkahl commented Mar 23, 2020

Reviewer: Greg Kahl

Review Type: Comprehension

Problem

Many embedded systems trust the inputs they receive from sensors that are presumed secure; however, there are ways to physically interfere with such sensors to change the data they receive and send to the system.

Contribution

This paper discusses the ways that sensor data can be manipulated by outside influence, such as transduction attacks, which take advantage of the physics of the sensor to interfere with the data being received using a different observable medium. An example of this is using a resonant audio frequency to interfere with accelerometer data. Another transduction attack takes advantage of the coupling in the wires of the sensor to spoof or jam data readings. The paper explores the ways that the physics of sensors can be abused to change data, and a couple of approaches that have been explored to try to create secure sensors or to monitor when they are being tampered with.

Questions

1 - They discussed having sensor output continuously checkable, but how does checking the data continuously let you know whether the data being received is valid or not? In the example of the Tesla obstacle detection, how would checking the data continuously tell them if there really is an obstacle or if it is being spoofed?
2 - Would having multiple copies of each sensor help to reduce these attacks? Even if they are the same type of sensor, would the interference produce different results in each one if they were in a different location or had a different angle on the target being sensed? (A rough cross-check sketch follows after these questions.)
3 - When they discussed attacks that use the hardware in the device to produce the interfering signals, how would a manufacturer/designer prevent their own hardware from interfering, such as with the built-in speaker?
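
On question 2, one common pattern (sketched below purely as my own illustration, not the paper's proposal) is to fuse several redundant sensors with a median and flag any instant where a sensor disagrees beyond an expected tolerance; whether physically separated sensors would actually see different interference is exactly the open question. The tolerance value is an assumption.

```python
import statistics

def fuse_and_check(readings, tolerance):
    """Fuse one instant's redundant readings; flag if any sensor deviates."""
    fused = statistics.median(readings)
    suspicious = any(abs(r - fused) > tolerance for r in readings)
    return fused, suspicious

# Example: three accelerometers; one is apparently being driven by interference.
print(fuse_and_check([0.02, 0.03, 1.70], tolerance=0.2))  # (0.03, True)
```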

@mjhegarty (Contributor)

Reviewer: Michael Hegarty
Review Type: Critical

Problem

Cyber-physical systems rely on sensors to give them important and timely information about the environment. Sensors, similar to communication protocols, can be subjected to a variety of attacks that can prevent or modify the information they transmit. These attacks, unlike communications attacks, cannot be prevented in software by something like encryption, leading to new challenges that must be addressed in the IoT field.

Contribution

The paper lists a variety of different vulnerabilities that can occur in sensors, and gives broad advice about how they should be handled and about the future of embedded systems. They categorize the attacks into two groups, front-door attacks and back-door attacks. Front-door attacks occur from inputs that the sensor is designed to capture, such as using ultrasound to send voice commands to a voice-activated device. Back-door attacks occur from sensors picking up information from a source that they were not designed to gather information from, such as a sound wave sent at a resonant frequency affecting an accelerometer. They recommend some ways to combat these attacks, such as assuming sensor inputs to be untrustworthy and considering security when assembling the actual circuit board of the sensor.

Questions

  1. For a system that verifies whether sensor input is trustworthy or not, what happens when it is found to be non-trustworthy? An airplane with a pilot can adapt to one of their sensors not giving accurate readings, but if an autonomous drone can't trust its telemetry, what is it supposed to do?
  2. How at risk are day-to-day sensors, such as car accelerometers, to being exploited by our smartphones' speakers? For example, can an ad playing sound from my phone while I am in the car cause my airbags to go off?
  3. What are the implications of actuators being able to affect nearby sensors in multi-tenant embedded systems?

Critiques

  1. I wish the paper listed more examples of Back-door attacks to give us a better sense of what types of things can influence different types of sensors.
  2. The Education section felt a little out of place in the paper and not very useful.
  3. I wish the physical security section went more in depth about what can be done on the physical side of things and maybe even discuss the idea of red-teaming physical sensors to look for vulnerabilities on the hardware side of things.

@anguyen0204 (Contributor)

Reviewer: Andrew Nguyen
Review Type: Critical

Problem

Sensors are transducers that are commonplace in many everyday devices, machines, and pieces of equipment. Because of this, there is a large risk of attacks dedicated to them. The plethora of sensors now deployed in the world creates a large risk for various people and entities due to the newfound emergence of sensor-targeted attacks. As a result, this paper dives into various examples of attacks and some of the methods used to mitigate them, providing insight to the reader.

Contribution

Firstly, sensors face two types of analog attacks: opportunistic attacks, which require no special equipment to tap into the sensors' vulnerabilities, and advanced attacks, which would trick the user or prompt them to engage with the attack in some way or another. Back-door attacks are also dangerous due to the idea of back-door coupling and its ability to let disruptive waves override the actual, necessary functions of a device. Implementing trustworthy embedded systems would be the solution. The key criteria for this are:

  • shift from component-centric security to system-centric security
  • make the output of sensor hardware continuously checkable
  • make attacks more difficult by manufacturing circuits in a manner to reduce effects of resonance

Questions

  1. How can you readily identify incoming sensor attacks? Or has there been anything in the works to clearly represent or show them?
  2. Could devices interfere with each other when trying to prevent a sensory attack? What if both devices are trying to defend themselves respectively? Would they in turn affect each other?

Critiques
The paper did not give specifics or many numbers about the attacks. Are there any figures that could convey the severity of this issue? It would be nice to be able to gauge how severe and impactful each type of attack is, at least in general. In addition, the back-door attack and back-door coupling discussion was a really nice part of the paper; it could have been expanded and tied to specific embedded systems.

@Others (Contributor)

Others commented Mar 24, 2020

Reviewer: Gregor Peach

Review Type: Critical Review

Problem Being Solved

When building software that runs on microcontrollers, our software has to be made secure--we understand this. But often programmers forget about the physical security properties of the system. While unfettered physical access may always lead to device compromise, we need to be cognizant of our security model. How can we deal with sensor fraud and spoofing?

Main Contributions

This short article outlines the basic problems with blindly trusting sensors, then sketches some potential solutions to them.

Questions

  1. How can we implement interdisciplinary techniques? What does that really look like?
  2. What sorts of education can professors be implementing so that students are prepared to work on the IoT?
  3. I don't really get the section on verifying sensor outputs. How can we verify something when we have no independent ground truth to compare it against?

Critiques

  1. This is a bit short and shallow--more like a newspaper article than a paper
  2. I thought the "back to basics" section was a bit silly. Obviously some students de-emphasize hardware -- we are software people after all -- but there are several leaps in logic here. I think the main problem in IoT security is a lack of security-first attitudes and careful threat models, rather than students misunderstanding formally verified code, for example.
  3. I wish there were more concrete examples of problems the suggested approaches would solve
