# sensors and sensing ## **Design of Autonomous Systems** ### csci 6907/4907-Section 86 ### Prof. **Sibin Mohan** --- embedded/autonomous system --- embedded/autonomous system
### **perceives** the physical world via sensors --- ### **perceives** the physical world via sensors gather information about: - environment - its _own_ state --- ## sensors --- ## sensors critical component in...
--- ## sensors today, we discuss... [images: example sensors] --- ## sensors modern autonomous systems → _wide array_ of sensors --- ## sensors modern autonomous systems → _wide array_ of sensors - measure **different** quantities, _e.g.,_ GPS, velocity, objects, _etc._ --- ## sensors modern autonomous systems → _wide array_ of sensors - measure **different** quantities, _e.g.,_ GPS, velocity, objects, _etc._ - sensor measurements often have **errors** --- ## core idea of a sensor capture a physical/chemical/environmental quantity... --- ## core idea of a sensor capture a physical/chemical/environmental quantity... ...**convert it to a digital quantity**! --- ## signals - by definition, sensors generate **signals** --- ## signals - by definition, sensors generate **signals** - mapping from the _time_ domain to a _value_ domain --- ## signals - mapping from the _time_ domain to a _value_ domain $$ s: D_t \mapsto D_v $$ --- ## signals - mapping from the _time_ domain to a _value_ domain $$ s: D_t \mapsto D_v $$ | symbol | description | |--------|------------------------------------------| | $D_t$ | continuous or discrete **time** domain | | $D_v$ | continuous or discrete **value** domain | || --- ## discrete
--- ## discrete **not** continuous → **sampling** --- ## discrete **not** continuous → **sampling**
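to make the mapping $s: D_t \mapsto D_v$ concrete, here is a minimal Python sketch (not from the slides; the 1 Hz sine and the `0.1 s` sampling period are made-up values) of a continuous-time signal and the discrete samples taken from it:

```python
import math

# continuous-time signal: s maps a time t (in seconds) to a value
def s(t: float) -> float:
    return math.sin(2 * math.pi * 1.0 * t)   # a 1 Hz sine wave (arbitrary choice)

# sampling: evaluate s only at discrete instants t = k*T
T = 0.1                                           # sampling period (seconds), i.e., f_s = 10 Hz
samples = [(k * T, s(k * T)) for k in range(11)]  # 11 samples covering one second

for t_k, v_k in samples:
    print(f"t = {t_k:.1f} s  ->  s(t) = {v_k:+.3f}")
```

each printed pair corresponds to one sampling instant (the red arrows in the figure described next)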
red arrows → instances of sampling --- red arrows → instances of sampling [we will come back to sampling later] --- ## **Types** of Sensors --- ## **Types** of Sensors - sensors come in various shapes and sizes --- ## **Types** of Sensors - sensors come in various shapes and sizes - designers → develop a **sensor plan** --- ## sensor plan considers - required functionality - sensor range(s) - cost --- ## sensor plan considers - required functionality - sensor range(s) - cost each autonomous system → its **own set of sensors** --- ## classification of sensors --- ## classification of sensors on _typical_ autonomous systems... --- ## classification of sensors ...based on physics used --- ## classification of sensors |physical property|sensor| |-----------------|-------| --- ## classification of sensors |physical property|sensor| |-----------------|-------| |[_internal_ measurements](#inertial-measurement-units-imu)| IMU | --- ## classification of sensors |physical property|sensor| |-----------------|-------| |[_internal_ measurements](#inertial-measurement-units-imu)| IMU | |_external_ measurements| GPS | --- ## classification of sensors |physical property|sensor| |-----------------|-------| |[_internal_ measurements](#inertial-measurement-units-imu)| IMU | |_external_ measurements| GPS | |["bouncing"
electromagnetic waves](#bouncing-of-electromagnetic-waves--lidar-and-mmwave)| LiDAR, RADAR,
mmWave Radar| --- ## classification of sensors |physical property|sensor| |-----------------|-------| |[_internal_ measurements](#inertial-measurement-units-imu)| IMU | |_external_ measurements| GPS | |["bouncing"
electromagnetic waves](#bouncing-of-electromagnetic-waves--lidar-and-mmwave)| LiDAR, RADAR,
mmWave Radar| |optical| cameras, infrared sensors| --- ## classification of sensors |physical property|sensor| |-----------------|-------| |[_internal_ measurements](#inertial-measurement-units-imu)| IMU | |_external_ measurements| GPS | |["bouncing"
electromagnetic waves](#bouncing-of-electromagnetic-waves--lidar-and-mmwave)| LiDAR, RADAR,
mmWave Radar| |optical| cameras, infrared sensors| |[acoustic](#ultrasonic)| ultrasonic sensors| || --- ## classification of sensors |physical property|sensor| |-----------------|-------| |[_internal_ measurements](#inertial-measurement-units-imu)| IMU | |_external_ measurements| GPS | |["bouncing"
electromagnetic waves](#bouncing-of-electromagnetic-waves--lidar-and-mmwave)| LiDAR, RADAR,
mmWave Radar| |optical| cameras, infrared sensors| |[acoustic](#ultrasonic)| ultrasonic sensors| || will focus on **some** of these --- some of them can be **combined** --- some of them can be **combined** to generate other sensing patterns --- some of them can be **combined** to generate other sensing patterns ### _e.g.,_ **stereo vision** using multiple cameras --- ### Inertial Measurement Units (**IMU**) --- ### Inertial Measurement Units (**IMU**) define movement of vehicles --- ### Inertial Measurement Units (**IMU**) define movement of vehicles - along **three** axes - **acceleration** - **directionality** --- ## IMU sensors --- ## IMU sensors
--- ## IMU sensors
includes a processor/microcontroller --- ## IMU sensors constituent parts of IMU: gyroscope, accelerometer, magnetometer (plus a processor/microcontroller) let's look at the functionality of each one... -v- ## gyroscope -v- ## gyroscope **inertial** sensor -v- ## gyroscope **inertial** sensor → measures **angular rate** about three axes (see the sketch below)
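a minimal sketch (hypothetical numbers, not a vendor API) of why angular rate matters: integrating the gyroscope's reported yaw rate over time gives a heading estimate, and even a small constant bias makes that estimate drift, a problem revisited later under "Errors in Sensing":

```python
# integrate a gyroscope's yaw-rate readings into a heading estimate
# hypothetical values: the vehicle is not turning, but the gyro has a
# small constant bias of 0.5 deg/s and reports at 100 Hz
dt = 0.01            # seconds between gyro samples (100 Hz)
bias = 0.5           # deg/s, constant bias error
true_rate = 0.0      # deg/s, the vehicle is actually not rotating

heading = 0.0        # integrated heading estimate, in degrees
for step in range(60 * 100):             # simulate 60 seconds
    measured_rate = true_rate + bias     # what the gyro actually reports
    heading += measured_rate * dt        # simple (Euler) integration

print(f"heading estimate after 60 s: {heading:.1f} deg")  # ~30 deg of pure drift
```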
| | "yaw" | "pitch" | "roll" | || Note: MUs come in all shapes and sizes. These days they're very small but the original IMU's ver really large, as evidenced by the one used in the [Apollo space missions] -v- |accelerometer|magnetometer| |-------------|------------| |inertial acceleration|strength/direction of magnetic field| --- ## LiDAR --- ## LiDAR light detection and ranging --- ## LiDAR light detection and ranging - uses eye safe **lasers beams** - mapping surroundings - create **3D representation** --- ## LiDAR light detection and ranging - uses eye safe **lasers beams** - mapping surroundings - create **3D representation** Note: lasers are used for... - imaging - detection - ranging --- ### typical operation
--- ### typical operation
range from the **round-trip time** $\tau$: $$ R = \frac{c\tau}{2} $$ where `c` is the speed of light. --- 3D representation → **point cloud**
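plugging a hypothetical echo delay (not from the slides) of $\tau = 667\,\mathrm{ns}$ into the round-trip-time equation above: $$ R = \frac{c\tau}{2} = \frac{(3 \times 10^8\ \mathrm{m/s}) \times (667 \times 10^{-9}\ \mathrm{s})}{2} \approx 100\ \mathrm{m} $$ each nanosecond of round-trip delay corresponds to roughly `15 cm` of range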
--- ### point clouds around autonomous vehicles
--- ## point clouds - **3D coordinates**, $(x, y, z)$ --- ## point clouds - **3D coordinates**, $(x, y, z)$ - **strength** of returned signal - **density**/material composition of objects! --- ## point clouds - **3D coordinates**, $(x, y, z)$ - **strength** of returned signal - **density**/material composition of objects! - additional **attributes**: - return number, scan angle, scan direction, - point density, RGB color values, time stamps - each can be used for refining the scan. --- ## scene illumination --- ## scene illumination | **flash** lidar
Note: - illuminates the _entire_ scene using a wide laser - receives all echoes on a photodetector array --- ## scene illumination | **scan** lidar
Note: - very narrow laser beams, scan the illumination spot with a laser beam scanner - single photodetector to sequentially estimate $\tau$ for each spot --- ## scene illumination | comparison | | flash lidar | scan lidar | |----|----|------| | **architecture** | [flash lidar architecture diagram] | [scan lidar architecture diagram] | | **resolution** determined by | photodetector array pixel size (like camera) | laser beam size and spot fixing | | **frame rates** | higher (up to `100 fps`) | lower (< `30 fps`) | | **range** | shorter (quick beam divergence, like photography) | longer (`100m+`) | | **use** | less common | **most common** | || --- ## scene illumination | _image_ comparison consider the following "scene" (taken using camera)
--- ## scene illumination | **flash** image
--- ## scene illumination | **scan** image [16 lines]
--- ## scene illumination | **scan** image [32 lines]
--- ## scene illumination | image comparisons [images: flash lidar | scan lidar [16] | scan lidar [32]] --- ## potential problems atmospheric/environmental conditions → **false positives**
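a minimal sketch (made-up data; `numpy` assumed) of how a point cloud might be post-processed to drop likely false positives: each point carries $(x, y, z)$ plus return strength, and points with very weak returns or implausible ranges are discarded; real pipelines typically use statistical outlier removal, so the thresholds here are purely illustrative:

```python
import numpy as np

# a tiny made-up point cloud: columns are x, y, z (meters) and return strength (0..1)
cloud = np.array([
    [  2.0,  0.5, 0.1, 0.90],   # solid return (e.g., a nearby car)
    [ 10.0, -3.0, 0.2, 0.75],
    [  0.4,  0.1, 0.0, 0.05],   # very weak return -> likely rain/fog echo
    [ 95.0, 40.0, 1.5, 0.60],
    [300.0,  5.0, 0.3, 0.80],   # beyond the sensor's usable range
])

ranges = np.linalg.norm(cloud[:, :3], axis=1)   # distance of each point from the sensor
keep = (cloud[:, 3] > 0.2) & (ranges < 120.0)   # illustrative strength/range thresholds

filtered = cloud[keep]
print(f"kept {len(filtered)} of {len(cloud)} points")
```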
--- look up the [textbook chapter on sensing](https://autonomy-course.github.io/textbook/autonomy-textbook.html#sensors-and-sensing) - for more information about LiDAR - also the mmWave RADAR sensor --- ## Errors in Sensing --- ## Errors in Sensing - sensors deal with → physical world - **errors** creep up over time! --- ## Errors in Sensing | types - sensor **drift** - **constant bias** - **calibration** errors - **scale factor** - **vibration rectification** errors - **noise** Note: | error type | description | |----------------|-------------| | **sensor drift** | over time the sensor measurements will "drift", i.e., a gradual change in its output → away from average values (e.g., due to wear and tear) | | **constant bias** | bias of an accelerometer is the offset of its output signal from the actual acceleration value. A constant bias error causes an error in position which grows with time | | **calibration errors** | ‘calibration errors’ refers to errors in the scale factors, alignments and linearities of the gyros. Such errors tend to produce errors when the device is turning. These errors can result in additional drift | | **scale factor** | scale factor is the relation of the accelerometer input to the actual sensor output for the measurement. Scale factor, expressed in ppm, is therefore the linear growth of input variation to actual measurement | | **vibration rectification errors** | vibration rectification error (VRE) is the response of an accelerometer to current rectification in the sensor, causing a shift in the offset of the accelerometer. This can be a significant cumulative error, which propagates with time and can lead to overcompensation in stabilization | | **noise** | random variations in the sensor output that do not correspond to the actual measured value | || --- ## Errors in Sensing each error → dealt with differently --- ## Errors in Sensing each error → dealt with differently can use **sensor fusion** to mitigate some errors --- ## analog-to-digital converter (ADC) --- ## analog-to-digital converter (ADC) $$s: D_t \mapsto D_v$$ --- ## analog-to-digital converter (ADC) $$s: D_t \mapsto D_v$$ microcontrollers **cannot** read values → unless they are **digital** --- need to **convert** signals into **discrete** values --- $$s: D_t \mapsto D_v$$ $D_v$ must be **discrete** Note: A discrete signal or discrete-time signal is a time series consisting of a sequence of quantities. Unlike a continuous-time signal, a discrete-time signal is not a function of a continuous argument; however, it may have been obtained by sampling from a continuous-time signal. When a discrete-time signal is obtained by sampling a sequence at uniformly spaced times, it has an associated **sampling rate**. --- ## sampling **convert analog → digital** [figure: analog signal]
--- ## sampling **convert analog → digital** [figures: analog signal | sampling rate] --- ## sampling **convert analog → digital** [figures: analog signal | sampling rate | sampling] --- ## adc - device that converts analog → digital - very common circuit/microcontroller - any _physical_ sensor signal → pass through ADC --- ## adc | example the analog signal is **discretized** into the digital signal --- ## adc | sequence - **sample** the signal - **quantize** → determine resolution - set **binary values** - **send to system** → read digital signal --- ### adc | important aspects 1. sampling rate 2. resolution --- ## ADC | **Sampling Rate** - samples per second (SPS or S/s) - **how many samples** (data points) are taken - more samples → can capture higher frequencies --- ## sample rate: $ f_s = \frac{1}{T} $ --- ## sample rate: $ f_s = \frac{1}{T} $ |symbol|definition| |------|----------| |$f_s$ | sampling rate/frequency| |$T$ | period of the sample | || ---
|symbol|value| |------|----------| |$f_s$ | `20 Hz` | |$T$ | `50 ms` | || ---
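a quick check that these values are consistent with the definition above: $$ f_s = \frac{1}{T} = \frac{1}{50\ \mathrm{ms}} = 20\ \mathrm{Hz} $$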
|symbol|value| |------|----------| |$f_s$ | `20 Hz` | |$T$ | `50 ms` | || this is _slow_, but the signal frequency is only `1 Hz` --- what if sampling frequency << signal frequency? --- what if sampling frequency << signal frequency? ## aliasing! --- ## aliasing! reconstructed signal → differs from input! --- ## aliasing! reconstructed signal → differs from input!
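a small `numpy` sketch (made-up frequencies) of exactly this effect: a 9 Hz cosine sampled at only 10 samples/s produces the same sample values as a 1 Hz cosine, so the reconstruction looks like the wrong, lower-frequency signal; the condition on the next slide says we would need at least 18 samples/s here:

```python
import numpy as np

fs = 10.0                          # sampling rate: 10 samples per second
t = np.arange(0, 1, 1 / fs)        # one second of sampling instants

fast = np.cos(2 * np.pi * 9 * t)   # a 9 Hz signal, well above fs/2 = 5 Hz
slow = np.cos(2 * np.pi * 1 * t)   # a 1 Hz signal

# the two sets of samples are indistinguishable -> the 9 Hz signal "aliases" to 1 Hz
print(np.allclose(fast, slow))     # True
```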
--- ## **Nyquist-Shannon Sampling Theorem** --- ## **Nyquist-Shannon Sampling Theorem** > sampling rate = **at least twice the highest frequency component** --- ## **Nyquist-Shannon Sampling Theorem** > sampling rate = **at least twice the highest frequency component** if less → aliasing creeps in! --- ## $f_{Nyquist} = 2 \cdot f_{max}$ --- ## $f_{Nyquist} = 2 \cdot f_{max}$
|symbol|definition| |------|----------| |$f_{Nyquist}$ | Nyquist sampling rate/frequency| |$f_{max}$ | the maximum frequency that appears in the signal | || --- ## adc | **resolution** --- ## adc | **resolution** - directly related to precision - determined by **bit length** --- ## adc | **bit length**
--- ## adc | **bit length**
increasing bit length → better representation of the digital signal --- ## adc | **true** resolution --- ## adc | **true** resolution $$ \text{Step Size} = \frac{V_{ref}}{N} $$ --- ## adc | **true** resolution $$ \text{Step Size} = \frac{V_{ref}}{N} $$ |symbol|definition| |------|----------| |$\text{Step Size}$| resolution of each level in terms of voltage| |$V_{ref}$ |voltage reference/range of voltages| |$N = 2^n$ | total "size" of the ADC| |$n$ | bit size| || --- ## numerical example - sine wave, voltage = `5V` - ADC chip precision = `12 bits` --- ## numerical example - sine wave, voltage = `5V` - ADC chip precision = `12 bits` - $N = 2^{12} = $ `4096` --- ## numerical example - sine wave, voltage = `5V` - ADC chip precision = `12 bits` - $N = 2^{12} = $ `4096` - $\text{Step Size} = \frac{5V}{4096}$ = `0.00122V` (or `1.22mV`) --- system can tell → voltage level changes by `1.22 mV`! --- what happens if $n=4$? --- let's **visualize** this issue --- consider the following signal:
--- we sample the signal, --- we sample the signal, we get → 9 measurements --- if **ADC bit width** → 2 bits --- if **ADC bit width** → 2 bits we can store → 4 values --- if **ADC bit width** → 2 bits we can store → 4 values hence, we **cannot** store 9 values! --- if we pick the 4 _best_ values --- if we pick the 4 _best_ values the _digital_ recreation looks like, --- if we pick the 4 _best_ values the _digital_ recreation looks like, this is **not** a good recreation of the original signal! --- what if ADC bit length → 4 bits? --- ADC bit length → 4 bits → 16 values can **easily fit** all 9 values! --- _digital_ representation: --- _digital_ representation → not _quite_ the same as original! --- _digital_ representation → not _quite_ the same as original! but still better than the previous situation --- to get higher fidelity, we need: 1. more samples and 2. higher bit widths --- sampling **frequency** and **resolution** → **quality** of ADC output
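a small `numpy` sketch tying the two knobs together: the same made-up signal is sampled 9 times and then quantized with 2-bit, 4-bit, and 12-bit ADCs (step size $\frac{V_{ref}}{N}$ with $N = 2^n$, as above); the wider bit widths track the signal much more closely, mirroring the walkthrough above:

```python
import numpy as np

def quantize(signal, v_ref, n_bits):
    """Map each sample onto one of the 2**n_bits ADC levels in [0, v_ref) by truncation."""
    levels = 2 ** n_bits
    step = v_ref / levels                          # Step Size = V_ref / N
    codes = np.clip(np.floor(signal / step), 0, levels - 1)
    return codes * step                            # back to volts, for comparison

v_ref = 5.0
t = np.linspace(0, 1, 9)                           # 9 samples, as in the walkthrough
signal = 2.5 + 2.4 * np.sin(2 * np.pi * t)         # made-up 0..5 V sine segment

for n in (2, 4, 12):
    err = np.max(np.abs(quantize(signal, v_ref, n) - signal))
    print(f"{n:2d}-bit ADC: step = {v_ref / 2**n:.5f} V, worst-case error = {err:.4f} V")
```

note that the 12-bit line reproduces the `1.22 mV` step size from the numerical example earlier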