Bot Detection based on Cognitive Modeling and Generative Adversarial Networks
Most current CAPTCHAs are designed for web interaction and rely on cognitive tasks. BeCAPTCHA-Mobile and BeCAPTCHA-Mouse explore the potential of mobile devices and the latest neuromotor analyses to model human-machine interaction for bot detection applications.
BeCAPTCHA-Mobile is based on swipe gestures (i.e. a drag-and-drop task) and accelerometer data. We model the gesture with features obtained from the touchscreen and accelerometer sensors in order to extract cognitive and neuromotor characteristics that allow us to discriminate between bots and human users from a simple drag-and-drop gesture.
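As an illustration of the kind of descriptors involved, the sketch below computes a few global features from a swipe trajectory and the accelerometer signal recorded during the gesture. The feature names and sampling assumptions are ours for illustration; this is not the exact BeCAPTCHA-Mobile feature set.

```python
import numpy as np

def swipe_features(xy, t, acc):
    """Global descriptors of a swipe gesture.

    xy  : (N, 2) touchscreen coordinates in pixels
    t   : (N,)  timestamps in seconds
    acc : (M, 3) accelerometer samples (ax, ay, az) captured during the swipe
    """
    xy, t, acc = map(np.asarray, (xy, t, acc))

    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # distance between consecutive points
    velocity = step / np.diff(t)                        # instantaneous speed profile

    path_length = step.sum()
    duration = t[-1] - t[0]
    straight = np.linalg.norm(xy[-1] - xy[0])           # end-to-end distance

    acc_mag = np.linalg.norm(acc, axis=1)               # magnitude of the acceleration force

    return {
        "duration": duration,
        "path_length": path_length,
        "straightness": straight / max(path_length, 1e-9),
        "mean_velocity": velocity.mean(),
        "peak_velocity": velocity.max(),
        "acc_mean": acc_mag.mean(),
        "acc_std": acc_mag.std(),
    }
```

Descriptors like these, computed per gesture, can then feed a bot/human classifier.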
BeCAPTCHA-Mouse explores the potential of behavioural biometric patterns to distinguish between malicious software bots and human users. Our technology uses an extended set of features derived from mouse trajectories to improve bot detection in both active and passive setups. This set of features is inspired by recent advances in the neuromotor modeling of human movements.
Modelling based on In-Built Sensors
Accelerometer, gyroscope, gravity sensor, touchscreen gestures, keystrokes, light sensor, WiFi, Bluetooth, camera, and microphone are some examples of the sensors and signals a smartphone acquires while we interact with it or simply carry it with us during our daily routines. These data can be used to model human-machine interaction and human behavior.
- Accelerometer and gyroscope both capture how the smartphone is moved: the accelerometer measures the magnitude and direction of the acceleration forces applied to the device, while the gyroscope measures its rotation.
- Touchscreen gestures involve all kinds of finger movements performed on the smartphone screen (e.g. swipe, tap, zoom).


Training Process based on GANs
We employ a patented GAN (Generative Adversarial Network) architecture in which two neural networks, commonly called the Generator and the Discriminator, are trained in adversarial mode. The Generator tries to fool the Discriminator by generating fake samples (touch trajectories and accelerometer signals in this work) that are very similar to the real ones, while the Discriminator has to distinguish the real samples from the fake ones. Once the Generator is trained, we can use it to synthesize swipe trajectories very similar to real ones.
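For readers unfamiliar with adversarial training, the following PyTorch sketch shows a generic GAN loop for flattened swipe trajectories. It is not the patented BeCAPTCHA architecture; the network sizes, trajectory length, and latent dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM = 50, 32   # assumed trajectory length and latent size

# Generator: latent noise -> fake swipe trajectory (flattened x,y sequence)
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, SEQ_LEN * 2),
)

# Discriminator: trajectory -> probability of being a real human swipe
D = nn.Sequential(
    nn.Linear(SEQ_LEN * 2, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial update. real_batch: (B, SEQ_LEN * 2) human swipes."""
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: push real samples towards 1 and generated samples towards 0
    fake = G(torch.randn(b, NOISE_DIM)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the Discriminator output 1 for generated samples
    fake = G(torch.randn(b, NOISE_DIM))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

After training, sampling `G(torch.randn(1, NOISE_DIM))` yields a synthetic trajectory that can be used to stress-test the bot detector.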

Neuromotor Analysis of Mouse Trajectories
By just looking at human mouse movements, we can already observe some patterns typical of how humans execute mouse trajectories: an initial acceleration and a final deceleration produced by the agonist muscles (which activate the movement) and the antagonist muscles (which oppose the joint torque), and a fine correction of the direction at the end of the trajectory when the cursor approaches the click target (characterized by a low velocity that improves the precision of the movement). These observations motivated us to use neuromotor analysis to find distinctive features in human mouse movements. Fine neuromotor skills, which are unique to human beings, are difficult for bots to emulate and can provide distinctive features to tell humans and bots apart.
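As a rough illustration of these observations, the sketch below computes the speed profile of a mouse trajectory and estimates how slow the final corrective phase is relative to the peak speed. The helper names and the fraction of the movement treated as the "final phase" are arbitrary choices for illustration.

```python
import numpy as np

def speed_profile(xy, t):
    """Instantaneous speed of a mouse trajectory (xy: (N, 2) pixels, t: (N,) seconds)."""
    xy, t = np.asarray(xy), np.asarray(t)
    return np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.diff(t)

def final_correction_ratio(xy, t, frac=0.25):
    """Crude cue of the fine-correction phase: mean speed over the last `frac` of the
    movement divided by the peak speed. Low values suggest a slow, precise adjustment
    near the click target, as typically seen in human trajectories."""
    v = speed_profile(xy, t)
    tail = v[int(len(v) * (1 - frac)):]
    return tail.mean() / v.max()
```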
To capture these dynamics systematically, we propose to model the trajectories with the Sigma-Lognormal model from the Kinematic Theory of rapid human movements. The model states that the velocity profile of human hand movements (mouse movements in this work) can be decomposed into primitive strokes with a lognormal shape, which describe well the nature of hand movements governed by the motor cortex.
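To make the model concrete, the sketch below implements the lognormal speed profile of a single stroke and sums several strokes into one velocity profile. It is a simplification that adds speed magnitudes rather than the full velocity vectors of the Sigma-Lognormal model, and all parameter values are purely illustrative.

```python
import numpy as np

def lognormal_stroke(t, D, t0, mu, sigma):
    """Speed of one Sigma-Lognormal primitive stroke.
    D: stroke amplitude, t0: onset time, mu/sigma: log-time delay and response time."""
    v = np.zeros_like(t, dtype=float)
    m = t > t0
    dt = t[m] - t0
    v[m] = D / (sigma * np.sqrt(2 * np.pi) * dt) * np.exp(
        -((np.log(dt) - mu) ** 2) / (2 * sigma ** 2))
    return v

def sigma_lognormal_speed(t, strokes):
    """Speed profile of a movement as the sum of its primitive strokes.
    strokes: list of (D, t0, mu, sigma) tuples."""
    return sum(lognormal_stroke(t, *p) for p in strokes)

# Example: a trajectory reconstructed from two overlapping strokes (illustrative parameters)
t = np.linspace(0.0, 1.5, 300)
v = sigma_lognormal_speed(t, [(120, 0.05, -1.2, 0.30), (40, 0.45, -1.0, 0.35)])
```

Fitting such stroke parameters to observed mouse trajectories yields the neuromotor features used to separate human movements from bot-generated ones.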
