pilfershush promotion photo

Concept

This is a work in progress (May 2021)

NUHF synth here: nuhf-synth.html

PilferShush examines the hidden methods that the Internet-of-Things uses to communicate with mobile phones and how its unregulated spread can challenge the way we think about public and private space.

This hidden method is the broadcast of inaudible sounds from IoT devices and services using near-ultra high frequency (NUHF) audio between 18 kHz and 24 kHz. The purpose of using such hidden sounds is to identify the person and their phone without their awareness.

Another method, increasingly popular, is to use Audio Content Recognition (ACR) systems. These SDKs are embedded in apps and utilise the microphone to record television adverts, programs and other "ambient" sounds.

Today's urban landscape is filled with creepy teddy bears, billboards that identify you, televisions that watch you, buildings and shops that track you and phones that will always listen to what you say. Combined they form part of a massive evolution in the way technology defines the economy and mediates social relations.

The deriving of profit from digitised identities, behaviours and actions has pushed the outsourcing of labour to the forefront of the digital economy. From the isolated and speculative income of the Uber driver and the unprotected food delivery rider, to the access of vital social services via online apps: the sharing economy socialises the risks, debts and running costs while privatising the profits.

One method to extract profit from human digital labour is to digitise the Time and Motion surveillance of the factory into software known as trackers. This type of software is typically made available for developers to include within their apps. It can provide a source of revenue via the exchange of gleaned statistics about who is using the app and how. Companies profit from this type of information by connecting the capabilities of tracker software with massive relational databases. These databases exist for the specific purpose of identifying people by their behaviours, their traits, their habits and their possessions. And from here the professed intention is to anticipate "your needs" before you are aware of them.

Other than overt surveillance, an app with an embedded SDK that uses the microphone for NUHF recordings can be used for contactless payments. While this may offer a secure method with encrypted data, relying on audio signals from a potentially unknown source to trigger payment transactions directly on the mobile phone may not be the most sensible use of this technology. Several of the companies involved in this area also have partnerships with advertising/marketing tracker companies or are directly involved in advertising. In some ways the use of the mobile phone's microphone could be viewed as if it were a bluetooth radio: something that needs to be turned on manually, has an indication that it is running, and can potentially record any audio.

Understanding this process may reveal the current scale of internet based surveillance and why privacy, and the right to privacy, is important - and why your phone does not need to record your conversations to know a great deal about you.

ACR Audio Tracker demo, an Audio Record capture demonstration.
To answer the question "is my device listening to me?", a demonstration is examined that uses an Android mobile phone, a free game app downloaded from Google Play and some analysis software.
Link: Audio Tracker demo

Known types of NUHF signals (also Audio Content Recognition, and Voice Interactive Adverts)
SDK package names: acrcloud, actv8 (cifrasoft), alphonso(ACR), axwave(ACR, NUHF), beatgridmedia(ACR), bitsound (soundlly), chirp, chromecast(NUHF), cifrasoft(ACR, NUHF), copsonic, cueaudio, digimarc(ACR), dv (dov-e), fanpictor, fidzup, fluzo(ACR), gracenote(ACR), hotstar(ACR)(zapr), hound(ACR), inscape(ACR), instreamatic (VIA), kantarmedia(ACR), lisnr, moodmedia, mufin (ACR), prontoly (now sonarax), realitymine(ACR), redbricklane(ACR)(zapr), runacr, shopkick, signal360, silverpush, sonarax, soniccode, sonicnotify (now Signal360), soundpays, tonetag, trillbit, uber, zapr(ACR)

The list below is not complete:

IoT beacons - Actv8, Bitsound, Cue Audio, Fidzup, Shopkick, Signal360(SonicNotify), Soundpays, ToneTag, Trillbit
Voice Assistants - Chromecast (Google Home), Lisnr, Trillbit
Web - Cifrasoft, Copsonic, Digimarc, Dov-e, Infosonic(soniccode), Lisnr, Sonarax, Soundpays, ToneTag
Internet streaming - ACRcloud, Alphonso, Cifrasoft, Instreamatic, Signal360, Trillbit
Television broadcast - ACRcloud, Actv8, Alphonso, Cifrasoft, Digimarc, Hotstar, Inscape, Red Brick Lane, Silverpush, Soundpays, Zapr
Peer-to-Peer - Chirp, Cifrasoft, CopSonic, Dov-e, Infosonic (soniccode), Lisnr, Sonarax, ToneTag, Uber

PilferShush Jammer app

Experimental app designed to counter NUHF beacon audio.

The Passive Jammer requests use of the hardware microphone from the Android system and holds it. This technique locks the microphone away from any other apps attempting to gain access to it. The Android system should override PilferShush Jammer's hold on the microphone whenever a phone call is received or made.

The Active Jammer emits audio tones defined by a carrier frequency, a drift limit and a drift rate, all constrained to the NUHF range of 18 kHz to 24 kHz depending on the device's capabilities. For instance, a 20000 Hz carrier with a drift limit of 1000 Hz and a slow rate will output a random frequency between 19 kHz and 21 kHz approximately every second.
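The carrier/drift/rate behaviour described above can be sketched as a simple frequency picker (a minimal sketch, not the app's actual code; the class and method names are illustrative):

```java
import java.util.Random;

// Sketch of an active jammer's frequency selection: at each rate interval a
// new output frequency is chosen at random within driftLimit of the carrier,
// then clamped to the NUHF band of 18 kHz to 24 kHz.
public class DriftingCarrier {
    static final double NUHF_MIN = 18000.0;
    static final double NUHF_MAX = 24000.0;

    // e.g. carrier 20000 Hz, driftLimit 1000 Hz -> a value in 19000..21000 Hz
    public static double nextFrequency(double carrier, double driftLimit, Random rng) {
        double f = carrier - driftLimit + rng.nextDouble() * 2.0 * driftLimit;
        return Math.min(NUHF_MAX, Math.max(NUHF_MIN, f));
    }
}
```

Feeding each chosen frequency to a sine oscillator on the device's audio output approximately once per second reproduces the slow-rate behaviour described above.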

The app also includes a user app summary function that lists relevant capabilities: record audio, boot listen, services, receivers and any known NUHF/ACR SDKs installed. A user app scanner lists any receivers and services for a selected user app.

It requires RECORD_AUDIO permission so that it may access and lock up the microphone from use.
It does NOT record or listen to any audio.
It does NOT connect to the internet.


Available on Google Play:
PilferShush Jammer on Google Play
and also on F-Droid:
PilferShush Jammer on F-Droid
or from the Github repo releases page: PilferShushJammer

Screenshot of Jammer app

PilferShush Jammer app screenshot

Resources

- Electrofringe 18 presentation paper (pdf)
- PilferShush Jammer source code on Github

NUHF audio basics

A basic understanding of near-ultra high frequency audio (NUHF) signals can be achieved by placing these signals within the scale of typical human hearing. The infographic below shows this range and the way an audio engineer would think about these frequencies.
audio spectrum chart of range of human hearing
The right side of the above graphic contains the higher frequencies of interest, from 18 kHz upwards, and it is this range that is essentially impossible to hear. The usual caveat with sound applies: you are more likely to hear it if it is very, very loud. CD audio (and wav files, most uncompressed audio formats, etc.) covers the range of 20 Hz to 20 kHz - the range most likely to be heard by humans - and so includes the near-ultrasonic range just referred to as being impossible to hear. In music this beyond-hearing range can also be referred to as "air" or "presence": words to describe the parts of a sound not necessarily heard but still felt. Like a lot of things, its absence can be more keenly felt than its presence. MP3 audio, apart from compressing the audio based upon psychoacoustic modelling, tends to have a sharp "roll-off" around 13-16 kHz ( more info here ). So a wav or aac file can contain NUHF signals whereas an mp3 will more than likely never contain any.
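The sample-rate arithmetic behind this can be made explicit: a recording at sample rate fs can only represent frequencies below fs / 2 (the Nyquist limit). A minimal sketch (names are illustrative):

```java
// Nyquist check: a signal at frequency f is only representable in a
// recording at sample rate fs when f < fs / 2.
public class NyquistCheck {
    public static double nyquist(double sampleRate) {
        return sampleRate / 2.0;
    }

    public static boolean canCarry(double sampleRate, double frequency) {
        return frequency < nyquist(sampleRate);
    }
}
```

CD quality 44.1 kHz gives a ceiling of 22.05 kHz, so the 18-20 kHz NUHF band fits; an 8 kHz ACR recording tops out at 4 kHz and cannot carry NUHF signals at all.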
chart of sound effects at various frequencies
The above chart shows the type of effect various frequencies can have on us. The lower frequencies (like bass guitars and bass drums) are felt strongly. They have longer waveforms, can travel further distances (dependent on the medium) and it can be difficult to pinpoint their source. The higher frequencies, in this chart referred to as "Air", have shorter waveform lengths but their source can be located more easily. The technique of echo-location or sonar ( see here for more info ), such as used by dolphins and bats, relies on the speed and accuracy of higher frequencies.

A beacon can be a device made specifically to output sound at NUHF frequencies using low-powered, always-on circuit boards and other hardware such as speakers. Any loudspeaker on any device or appliance that can play CD quality audio can be considered able to transmit NUHF signals as well - and this includes mobile phones.

These NUHF signals have been captured being broadcast over free to air television, music streaming services, video streaming services, websites, shop doorways and IoT devices embedded in shelving.

ACR audio basics

Audio Content Recognition is a system that generates an identifiable and simple "fingerprint" of a complex audio signal. A simplified example is the way the Shazam app listens to a song that is playing and can then identify the title and artist. To do this, the company offering the service creates a database of audio fingerprints generated by processing audio files through a particular algorithm. This algorithm creates a fingerprint based upon key features of the audio as determined by a spectrogram.
side by side comparison of audio spectrums
The left side of the above image shows the initial spectrogram of the audio as a representation of frequencies over time. The right side shows an example of key features being identified - in this case, the intensity of a given frequency. From this, a small section can be indexed to provide a hash which can be stored in a database and queried.
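A toy version of this peak-and-hash process might look like the following (illustrative only, not any vendor's actual algorithm): split the samples into windows, find the dominant frequency bin of each window, then combine consecutive peak bins into small hash values that could be stored and queried.

```java
// Toy audio fingerprint in the spirit of spectral-peak hashing: per window,
// find the strongest DFT bin, then pack consecutive peak bins into hashes.
public class ToyFingerprint {
    // Magnitude of DFT bin k over one window (naive, O(n) per bin).
    static double binMagnitude(double[] window, int k) {
        double re = 0, im = 0;
        for (int n = 0; n < window.length; n++) {
            double phase = 2 * Math.PI * k * n / window.length;
            re += window[n] * Math.cos(phase);
            im -= window[n] * Math.sin(phase);
        }
        return Math.hypot(re, im);
    }

    static int peakBin(double[] window) {
        int best = 0;
        double bestMag = -1;
        for (int k = 1; k <= window.length / 2; k++) {
            double m = binMagnitude(window, k);
            if (m > bestMag) { bestMag = m; best = k; }
        }
        return best;
    }

    // Hash pairs of consecutive peaks: (peak[i] << 16) | peak[i + 1].
    public static int[] fingerprint(double[] samples, int windowSize) {
        int windows = samples.length / windowSize;
        int[] peaks = new int[windows];
        for (int w = 0; w < windows; w++) {
            double[] win = new double[windowSize];
            System.arraycopy(samples, w * windowSize, win, 0, windowSize);
            peaks[w] = peakBin(win);
        }
        int[] hashes = new int[Math.max(0, windows - 1)];
        for (int i = 0; i < hashes.length; i++)
            hashes[i] = (peaks[i] << 16) | peaks[i + 1];
        return hashes;
    }
}
```

Real systems use constellation maps of several peaks and robust time offsets, but the principle - reduce audio to a handful of queryable hashes - is the same.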

Following a similar methodology, several ACR companies have created SDKs that are embedded within smart phone apps for the purpose of tracking what television shows and adverts are watched. The recordings usually last a few seconds (5 - 15 secs) and are processed on the device to generate a "fingerprint" of the audio. This fingerprint is then sent via a network connection to servers for querying. Most of these companies provide analytics to their clients so that they can assess which adverts are being seen and by whom.

Record audio method

The following is a simplified examination of the method that apps use to record audio. As part of a broader tracking and analytics practice, the SDKs have varying degrees of information about both the user and the device the SDK is installed on - from name, gender and age to other information such as the Android advertising ID, IMEI, telephony, contacts, hardware statistics and device profiles. Even if this information is not available to the SDK itself, these companies operate as part of a larger network of companies that already gathers it.
      Uri.parse("content://com.android.chrome.browser/bookmarks");
      actionData.setEventProperty("calendar", "1");
      actionData.setEventProperty("contacts", "1");
      actionData.setEventProperty("location", "1");
      actionData.setEventProperty("media", "1");
      actionData.setEventProperty("phone", "1");
      actionData.setEventProperty("sms", "1");

Basic information regarding the user and the device is then augmented by information specific to the operation of the phone. By examining information gathered from the sensors of most typical phones, it is possible to determine whether a phone is being used and how. For example, the accelerometers provide the movement and orientation of the device, the power status shows whether the phone is being charged, and the light sensors can indicate whether the screen is active.
      ActivityCompat.checkSelfPermission(context, "android.permission.READ_PHONE_STATE")
      IntentFilter.addAction("android.intent.action.USER_PRESENT");
      IntentFilter.addAction("android.intent.action.SCREEN_OFF");
      IntentFilter.addAction("android.intent.action.SCREEN_ON");

Information such as this is used by most SDKs to determine whether they should run the record audio process, which can be CPU intensive. Another trigger for recording can be location information derived from GPS, the cellular network or wifi AP scanning. Some SDKs also use the level of sound present at the microphone, or specific times of the day such as the prime time television hours, to wake the record audio process:
      public static final String ACS_EVENING_PRIME_TIME_BEGIN_DEFAULT = "19:00";
      public static final String ACS_EVENING_PRIME_TIME_END_DEFAULT = "22:00";
      public static final String ACS_MORNING_PRIME_TIME_BEGIN_DEFAULT = "06:00";
      public static final String ACS_MORNING_PRIME_TIME_END_DEFAULT = "09:00";

When the recording takes place, as determined by factors such as those listed above, the code in the SDK makes a call to the appropriate Android API function. This typically uses settings similar to CD quality audio (44.1 kHz, 16 bit, mono); NUHF detection requires at least the Android default sample rate of 44.1 kHz. ACR, however, can record at lower sample rates such as 8 kHz for efficiency, as it does not need high quality, listenable audio - just enough to generate key-feature fingerprints.
      audioRecord = new AudioRecord(
              audioSettings.getAudioSource(),
              audioSettings.getSampleRate(),
              audioSettings.getChannelInConfig(),
              audioSettings.getEncoding(),
              audioSettings.getBufferInSize());

Then the code makes a request to the operating system for access to the microphone. This access can be denied if some other user-installed app, or a system function such as telephony, is using the microphone. If the request is denied then the SDK cannot use the microphone; if it is allowed then the SDK can start reading from the audio buffer.
      short[] tempBuffer = new short[audioSettings.getBufferSize()];
      // loop until an external flag ends the recording session
      do {
        audioRecord.read(tempBuffer, 0, audioSettings.getBufferSize());
      } while (recording);

What this read(tempBuffer) call is doing is asking the system to copy whatever audio data is present at the microphone directly into the tempBuffer array. By looking at the source file AudioRecord.cpp, as found in the AOSP repo, it is possible to see how this audio data supply works: it reads audio data from the audio hardware and copies it into the supplied buffer.
      ssize_t AudioRecord::read(void* buffer, size_t userSize)
              memcpy(buffer, audioBuffer.i8, bytesRead);
              read += bytesRead;
              return read;

Now the SDK has some audio of length n. What happens next depends on the type of audio the SDK is looking for. The NUHF SDKs examined tend to reduce noise and unnecessary sounds by applying a filter that effectively silences any audio below the 18 - 24 kHz range. From there the SDK runs various processes looking for suitably high amplitude signals of a specific length, frequency and pattern. If an SDK follows this methodology then it is safe to ASSUME that it DOES NOT record any conversations.
      example detection method:
          raw signal (record audio)
          high pass filter (filter all but ~18kHz up)
          matched filter (template of known signal)
          Hilbert transform (to identify candidate "thin" peaks)
          peak selection (for candidate timestamps for beacon signals)
          decode each candidate 
          use Hamming to validate
          discard those candidates with errors
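The final steps of the pipeline above (decode each candidate, use Hamming to validate, discard errors) can be illustrated with a toy Hamming(7,4) decoder, which corrects a single flipped bit per 7-bit codeword before extracting the 4 data bits (a sketch only, not any SDK's actual code):

```java
// Hamming(7,4) sketch: each 7-bit codeword carries 4 data bits; a non-zero
// syndrome locates (and here corrects) a single flipped bit before the data
// bits are extracted.
public class Hamming74 {
    // Codeword layout: positions 1..7 with parity bits at positions 1, 2, 4.
    public static int[] encode(int d1, int d2, int d3, int d4) {
        int p1 = d1 ^ d2 ^ d4;
        int p2 = d1 ^ d3 ^ d4;
        int p4 = d2 ^ d3 ^ d4;
        return new int[]{p1, p2, d1, p4, d2, d3, d4};
    }

    // Returns the 4 data bits after correcting at most one bit error in c.
    public static int[] decode(int[] c) {
        int s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
        int s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
        int s4 = c[3] ^ c[4] ^ c[5] ^ c[6];
        int syndrome = s1 + 2 * s2 + 4 * s4; // 1-indexed error position, 0 = clean
        if (syndrome != 0) c[syndrome - 1] ^= 1;
        return new int[]{c[2], c[4], c[5], c[6]};
    }
}
```

A beacon candidate whose codewords produce syndromes that cannot be resolved this way would be discarded, as in the last step of the pipeline.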

The ACR SDKs work differently, as they are interested in the human-audible sounds recordable by the smart phone as well as what is referred to as "ambient" background sound. After recording, the SDK processes the audio using the same algorithm the owning company used to populate its database of reference sounds. One method observed is to break the recording into small 64 kb (packet sized) files of raw audio data - files that can be imported into an audio program such as Audacity and played back, slowed down, to reveal the original, legible audio. These files are then uploaded to the owning company's servers for matching against the database. Other SDKs upload only the hashed fingerprint data to their servers, not the audio.
    public static final String KEY_ACR_DB_FILENAME = "acr_db_filename";
    public static final String KEY_ACR_DB_FILE_ABS_PATH = "acr_db_file_abs_path";
    public static final String KEY_ACR_DB_FILE_DIR = "acr_db_file_dir";
    public static final String KEY_ACR_DB_SERVER_NAME = "acr_db_server_name";
    public static final String KEY_ACR_DB_SERVER_PORT = "acr_db_server_port";
    public static final String KEY_ACR_INSECURE_SERVER = "acr_db_insecure_server";
    public static final String KEY_ACS_ACR_MODE = "acr_mode";
    public static final String KEY_ACS_ACR_SHIFT = "acr_shift";
    public static final String KEY_ACS_AUDIO_FILE_UPLOAD_FLAG = "audio_file_upload_flag";



Research

- The Technology of Computer Music (Sound Processing) (6.7 mb pdf)
- A Study of Scripts Accessing Smartphone Sensors (0.8 mb pdf)
- Active Acoustic Side-Channel Attacks (3.4 mb pdf)
- Acoustic Indoor Smart Phone Tracking (3.0 mb pdf)
- Audio and Video Exfiltration from Android Applications (0.5 mb pdf)
- Localization using controlled ambient sounds (0.6 mb pdf)
- Global Study of the Mobile Tracking Ecosystem (2.0 mb pdf)
- Mobile Device Sensor Fingerprinting (1.6 mb pdf)
- Privacy and Security of the Ultrasound Ecosystem (1.6 mb pdf)
- Smartphone Audio Acquisition Using an Acoustic Beacon (3.7 mb pdf)
- Using Smartphones to Collect Behavioral Data in Psychological Science (0.5 mb pdf)
- Third Party Tracking in the Mobile Ecosystem (1.4 mb pdf)
- A 1-million-site Measurement and Analysis (2.5 mb pdf)
- Shazam audio search algorithm (0.5 mb pdf)

Patents
- UTILIZING AUDIO BEACONING IN AUDIENCE MEASUREMENT (2009) (1.1 mb pdf)
- MATCHING TECHNIQUES FOR CROSS-PLATFORM MONITORING (2010) (1.3 mb pdf)
- BROADCAST CONTENT VIEW BASED ON AMBIENT AUDIO RECORDING (2016) (0.2 mb pdf)
- MODULATE A CODE AND PROVIDE CONTENT TO A USER (2011) (0.5 mb pdf)
- ACOUSTIC MODULATION PROTOCOL (2011) (2.3 mb pdf)
- DEMODULATE A MODULATED CODE PROVIDE CONTENT TO A USER (2013) (0.7 mb pdf)
- MODULATE A CODE AND PROVIDE CONTENT TO A USER (2012) (0.2 mb pdf)
- TRANSMITTING DATA OVER VOICE CHANNEL (2013) (1.5 mb pdf)

IoT Beacon materials
- Sonic Notify user manual (2.5 mb pdf)

ACR materials
- Zapr SDK Developer Guide (0.7 mb pdf)

Audio record analysis

Basic information about how the SDKs' code functions. Recording starts with a call to the Android/Java API that deals with audio recording and playback. From there, with a buffer array full of audio data, the data can be sent to a native code library that is also installed as part of the SDK. These libraries handle the more CPU-intensive work, such as sifting through the data using various common methods (Goertzel et al.) to find audio signals of interest.
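The Goertzel algorithm suits this task because it computes the energy at a single target frequency far more cheaply than a full FFT - well matched to checking a buffer for one known beacon tone. A minimal sketch (names are illustrative):

```java
// Goertzel algorithm sketch: squared magnitude of one target frequency in a
// sample buffer, without computing a full FFT. Useful for checking whether
// a single NUHF beacon tone (e.g. 18 kHz) is present.
public class Goertzel {
    public static double power(double[] samples, double sampleRate, double targetFreq) {
        double omega = 2.0 * Math.PI * targetFreq / sampleRate;
        double coeff = 2.0 * Math.cos(omega);
        double sPrev = 0, sPrev2 = 0;
        for (double x : samples) {
            double s = x + coeff * sPrev - sPrev2;
            sPrev2 = sPrev;
            sPrev = s;
        }
        // squared magnitude of the target bin
        return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
    }
}
```

Scanning a handful of candidate NUHF frequencies this way is far cheaper than transforming the whole spectrum, which is why it appears in these native libraries.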

This first section shows some of the Android/Java function calls and parameters used.

alphonso
ALPHONSO_VERSION = "2.0.46";
    private static final int RECORDER_AUDIO_BYTES_PER_SEC = 16000;
    private static final int RECORDER_AUDIO_ENCODING = 2;
    private static final int RECORDER_BIG_BUFFER_MULTIPLIER = 16;
    private static final int RECORDER_CHANNELS = 16;
    private static final int RECORDER_SAMPLERATE_44100 = 44100;
    private static final int RECORDER_SAMPLERATE_8000 = 8000;
    private static final int RECORDER_SMALL_BUFFER_MULTIPLIER = 4;
    public static final byte ACR_SHIFT_186 = (byte) 0;
    public static final byte ACR_SHIFT_93 = (byte) 1;
    public static final int ACR_SPLIT = 2;
bitsound
VERSION_NAME = "v4.2.2"
    public void a(int i) {
      try {
        this.d = new AudioRecord(6, this.b, 16, 2, i);
        if (this.d.getState() == 1) {
          try {
            this.d.startRecording();
            if (this.d.getRecordingState() != 3) {
              b.c(a, "Audio recording startDetection fail");
              this.d.release();
              this.e = false;
              return;
            }
            a(this.d);
            this.e = true;
            return;
cifrasoft
VERSION_NAME = "1.0.3"
    public static final int AUDIO_BUFFER_SIZE_MULTIPLIER = 4;
    public static final int AUDIO_THREAD_STOP_TIMEOUT = 3000;
    public static final int MAX_EMPTY_AUDIO_BUFFER_SEQUENTIAL_READS = 10;
    this.SAMPLE_RATE = 44100;
    private int readAudioData(int currentPcmOffset, byte[] pcm) {
      AudioRecordService.handler.sendEmptyMessageDelayed(1, 3000);
      int result = this.mAudioRecord.read(pcm, currentPcmOffset * 2, this.bufferLength * 2);
      AudioRecordService.handler.removeMessages(1);
      return result;
    }
copsonic
CORE_VERSION = "SonicAuth_CORE_v1.2.2.1";
    "signalType": "ULTRASONIC_TONES",
    "content" : {
        "frequencies" : [ [18000, 20000, "TwoTones"] ]

    "signalType": "ZADOFF_CHU",
    "content": {
      "config": {
        "samplingFreq": 44100,
        "minFreq": 18000,
        "maxFreq": 19850,
        "filterRolloff": 0.5,
        "totalSignalTime": 0.3,
        "nMsgSymbols": 2,
        "filterSpan": 8
      },
      "set": {
        "centralFreq": 18925,
        "nElemSamples": 36,
        "nSymbolElems": 181
dv (dov-e)
VERSION_NAME = "1.1.7"
    private void recorderWork() {
      if (this.recordingActive) {
        int bytesReadNumber = this.myRecorder.read(this.myBuffer, 0, this.myBuffer.length);
        if (this.recordingActive) {
          DVSDK.getInstance().DVCRxAudioSamplesProcessEvent(this.myBuffer, 0, bytesReadNumber / 2);
        }
      }
    }
fanpictor
VERSION_NAME = "3.2.3"
    enum FNPFrequencyBand {
      Default,
      Low,
      High
    }
              
fidzup
    a. this.frequency = paramBasicAudioAnalyzerConfig.frequency;   // 19000.0f
    b. this.samplingFrequency = paramBasicAudioAnalyzerConfig.samplingRate;    // 44100.0f
    c. this.windowSize = paramBasicAudioAnalyzerConfig.windowSize;   // 0x200 (512)
    d. /* pulseDuration = 69.66f */
    e. this.pulseWidth = Math.round(paramBasicAudioAnalyzerConfig.pulseDuration * (this.samplingFrequency / 1000.0F));
    f. this.pulseRatio = paramBasicAudioAnalyzerConfig.pulseRatio;   // 32.0f
    /* signalSize = 0x20 (32)
    g. this.signalPeriodPulses = paramBasicAudioAnalyzerConfig.signalSize;
    h. this.bitCounts = paramBasicAudioAnalyzerConfig.bitcounts;   // 0xb (11)
    paramf.a = 19000.0F;            
    paramf.b = 44100.0F;            
    paramf.c = 512;                 
    paramf.d = 69.66F;              
    paramf.e = 0.33333334F;         
    paramf.f = ((int)(paramf.d * 32.0F * 3.2F)); // 7133.184
    paramf.g = 32;                 
    paramf.h = new int[] { 15, 17, 19, 13, 11, 21, 23, 9, 7, 25, 27 };
fluzo
VERSION = "1.3.001"
    this.p = jSONObject.getInt("frame_length_milliseconds");
    this.q = jSONObject.getInt("frame_step_milliseconds");
    this.r = (float) jSONObject.getDouble("preemphasis_coefficient");
    this.s = jSONObject.getInt("num_filters");
    this.t = jSONObject.getInt("num_coefficients");
    this.u = jSONObject.getInt("derivative_window_size");
instreamatic
VERSION_NAME = "7.16.0"
    private static final int BUFFER_SECONDS = 5;
    private static int DESIRED_SAMPLE_RATE = 16000;
lisnr
VERSION_NAME = "5.0.1.1";
    // LisnrIDTone          
    public long calculateToneDuration() {
        return ((long) (((double) (this.lastIteration + 1)) * 2.72d)) * 1000;
    }
    // LisnrTextTone
    public long calculateToneDuration() {
        return (long) (((this.text.length() * 6) * 40) + 1280);
    }
    // LisnrDataTone
    public long calculateToneDuration() {
        return (long) (((this.data.length * 6) * 40) + 1280);
    }
    AudioRecord audioRecord = new AudioRecord(0, d, 16, 2, 131072);
    ArrayAudioPlayer.this.audioOutput = new AudioTrack(3, ArrayAudioPlayer.this.samplerate, 4, 2, 16000, 1);
    ArrayAudioPlayer.this.audioOutput.play();
    int written = 0;
    while (!ArrayAudioPlayer.this.threadShouldStop) {
      try {
        if (ArrayAudioPlayer.this.buffer.getBufferLeftToRead() > 0) {
          int size = ArrayAudioPlayer.this.buffer.getBufferLeftToRead();
          written += size;
          ArrayAudioPlayer.this.audioOutput.write(ArrayAudioPlayer.this.buffer.readFromBuffer(size), 0, size);
          } else {
            ArrayAudioPlayer.this.threadShouldStop = true;
          }
        } catch (IOException e) {
          e.printStackTrace();
        }
moodmedia
getVersion() = "1.2.1";
    b = new AudioRecord(5, 44100, 16, 2, Math.max(AudioRecord.getMinBufferSize(44100, 16, 2) * 4, 32768));
    this.b = Type.SONIC;
    this.b = Type.ULTRASONIC;
    if (num.intValue() == 44100 || num.intValue() == 48000)
    this.j.setName("Demodulator");
    this.k.setName("Decoder");
    this.l.setName("HitCounter");
              
prontoly (sonarax)
VERSION_NAME = "4.2.0";
    contentValues.put("time", cVar.a);
    contentValues.put("type", cVar.b.name());
    contentValues.put(NotificationCompat.CATEGORY_EVENT, cVar.c);
    contentValues.put("communication_type", cVar.d);
    contentValues.put("sample_rate", cVar.e);
    contentValues.put("range_mode", cVar.f);
    contentValues.put("data", cVar.g);
    contentValues.put("duration", cVar.h);
    contentValues.put("count", cVar.i);
    contentValues.put("volume", cVar.j);
realitymine
getSdkVersion = "5.1.6";
    this.e = AudioRecord.getMinBufferSize(44100, 16, 2);
    int i = this.e;
    this.d = new byte[i];
    this.c = new AudioRecord(1, 44100, 16, 2, i);
redbricklane (zapr)
SDK_VERSION = "3.3.0";
    AudioRecord localAudioRecord = new AudioRecord(1, 8000, 16, 2, 122880);
    if (localAudioRecord.getState() == 1) {
      this.logger.write_log("Recorder initialized", "finger_print_manager");
      this.logger.write_log("Recording started", "finger_print_manager");
      localAudioRecord.startRecording();
runacr
release = "1.0.4"
    int minBufferSize = AudioRecord.getMinBufferSize(11025, 16, 2);
    this.K = new AudioRecord(6, 11025, 16, 2, minBufferSize * 10);
shopkick
    .field bitDetectThreshold:Ljava/lang/Double;
    .field carrierThreshold:Ljava/lang/Double;
    .field detectThreshold:Ljava/lang/Double;
    .field frFactors:Ljava/lang/String;
    .field gapInSamplesBetweenLowFreqAndCalibration:Ljava/lang/Integer;
    .field maxFracOfAvgForOne:Ljava/lang/Double;
    .field maxIntermediates:Ljava/lang/Integer;
    .field minCarriers:Ljava/lang/Integer;
    .field noiseThreshold:Ljava/lang/Double;
    .field numPrefixBitsRequired:Ljava/lang/Integer;
    .field numSamplesToCalibrateWith:Ljava/lang/Integer;
    .field presenceDetectMinBits:Ljava/lang/Integer;
    .field presenceNarrowBandDetectThreshold:Ljava/lang/Double;
    .field presenceStrengthRatioThreshold:Ljava/lang/Double;
    .field presenceWideBandDetectThreshold:Ljava/lang/Double;
    .field useErrorCorrection:Ljava/lang/Boolean;
    .field wideBandPresenceDetectEnabled:Ljava/lang/Boolean;
    .field highPassFilterType:Ljava/lang/Integer;
    Java_com_shopkick_app_presence_NativePresencePipeline_setDopplerCorrectionEnabledParam
    Java_com_shopkick_app_presence_NativePresencePipeline_setHighPassFilterEnabledParam
    Java_com_shopkick_app_presence_NativePresencePipeline_setWideBandDetectEnabledParam
    Java_com_shopkick_app_presence_NativePresencePipeline_setNumPrefixBitsRequiredParam
    Java_com_shopkick_app_presence_NativePresencePipeline_setPresenceDetectNarrowBandDetectThresholdFCParam
    Java_com_shopkick_app_presence_NativePresencePipeline_setGapInSamplesBtwLowFreqAndCalibFCParam
    Java_com_shopkick_app_presence_NativePresencePipeline_setCarrierThresholdFCParam
    Java_com_shopkick_app_presence_NativePresencePipeline_setHighPassFilterTypeHPFParam
signal360 (sonic notify)
VERSION_NAME = "4.90.123";
    private static final int BUFFER_SIZE = 131072;
    private static final int FREQ_STEPS = 128;
    private static final int PAYLOAD_LENGTH = 48;
    private static final int READ_BUFFER_SIZE = 16384;
    public static final int SAMPPERSEC = 44100;
    private static final int STEP_SIZE = 256;
    private static final int TIMESTEPS_PER_CHUNK = 64;
    private static final int USABLE_LENGTH = 256;
silverpush
String d = "1.1.0";
    for (int i : new int[]{4096, 8192}) {
      AudioRecord audioRecord = new AudioRecord(1, 44100, 16, 2, i);
      if (audioRecord.getState() == 1) {
        audioRecord.release();
        return i;
      }
    }
soniccode
getVersion() { return "2.2"; }
    // player
    float[] decodeLocationFloat = decodeLocationFloat(str);
    AudioTrack audioTrack = new AudioTrack(8, 44100, 2, 2, AudioTrack.getMinBufferSize(44100, 2, 2), 1);
    this.audioGenerator = new STAudioGenerator();
    this.audioGenerator.setAudioTrack(audioTrack);
    this.audioGenerator.setAmplitude(this._amplitude);
    this.audioGenerator.setBlockTime(this._blockTime);
    this.audioGenerator.setFrequencies(decodeLocationFloat);
soundpays
SDK_VERSION = "2.0"
    this.a.a(18400.0f, 20000.0f);
    this.a.a(new float[]{18475.0f, 18550.0f, 18625.0f, 18700.0f, 18775.0f, 18850.0f, 18925.0f, 19000.0f, 19075.0f, 19150.0f, 19225.0f, 19300.0f, 19375.0f, 19450.0f, 19525.0f, 19600.0f, 19675.0f, 19750.0f, 19825.0f, 19900.0f});
tonetag
String f = "2.1.3"
    if (stringBuilder2.matches("^[0-9]{1,5}[.][0-9]{1,2}$"))
    this.aY = new AudioRecord(1, 44100, 16, 2, 50000);
    private static native void initRecordingNative(int i, int i2, int i3, String str);
    private static native void initRecordingUltraToneNative(int i, int i2, String str);
    private native void processUltraFreqsNative(double[] dArr, String str);
    _8KHZ(40),
    _10KHZ(50),
    _12KHZ(60),
    _14KHZ(70),
    _16KHZ(80),
    _18KHZ(90);              
trillbit
VERSION_NAME = "1.0";
    private static final int SAMPLE_RATE = 44100;
    private static final String TAG = "AutoToneDetectorClass";
    private static final int WINDOW_SIZE = 4096;
    
    private static final int CHUNKS_AFTER_TRIGGER = 23;
    private static final int CHUNKS_BEFORE_TRIGGER = 2;
    private static final int CHUNK_SIZE = 4096;
   
    this.recorder = new AudioRecord(this.AUDIO_SOURCE, SAMPLE_RATE, 16, 2, 4096);
    int recorderState = this.recorder.getState();
    Log.i(TAG, "RECORDER STATE : " + String.valueOf(recorderState));
    if (recorderState == 1) {
      try {
        this.recorder.startRecording();
        Log.i(TAG, "Recording Started");
        startAsyncTasks();
      } catch (Exception e) {
        e.printStackTrace();
      }
    } 
    try {
      data = new DataPart("temp.mp3", UploadToServer.this.getDatafromFile(str));
    } catch (IOException e) {
      e.printStackTrace();
    }
    params.put("mp3file", data);
    return params; 
    Log.d("V2", "Sending Information to backend");
    js.put("Device", Build.MANUFACTURER + " " + Build.MODEL);
    js.put("audio_chunks", jsonArr);
    js.put("MIC_SRC", "MIC");
    js.put("FREQ_PLAYED", "3730");
    js.put("MIN_BUFFER", "4096");
       
    int[] original = new int[]{5, 5, 1, 2, 3, 4};

Process audio analysis

Notes from native library analysis.

Audio analysis using discrete wavelet (Haar) transform.
pdf publication (75kB)
Extracted wavelet coefficients provide a compact representation that shows the energy distribution of the signal in time and frequency.

A window size of 65536 samples at a 22050 Hz sampling rate, with a hop size of 512 samples, is used as input to the feature extraction. This corresponds to approximately 3 seconds of audio. Twelve levels (subbands) of coefficients are used, resulting in a feature vector with 45 dimensions (12 + 12 + 11).

Using wavelets to remove noise from a signal requires identifying which components contain the noise and then reconstructing the signal without those components.
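One level of the Haar transform is just scaled pairwise sums and differences; applying it repeatedly to the approximation half yields the subband coefficients described above. A minimal sketch:

```java
// One level of the Haar discrete wavelet transform: the first half of the
// output holds scaled pairwise sums (the approximation), the second half
// scaled pairwise differences (the detail). Repeating on the approximation
// half produces successive subbands.
public class HaarDwt {
    public static double[] oneLevel(double[] x) {
        int half = x.length / 2;
        double[] out = new double[x.length];
        double s = Math.sqrt(2.0);
        for (int i = 0; i < half; i++) {
            out[i] = (x[2 * i] + x[2 * i + 1]) / s;        // approximation
            out[half + i] = (x[2 * i] - x[2 * i + 1]) / s; // detail
        }
        return out;
    }
}
```

The energy in each detail subband then gives the compact time/frequency representation used for the feature vector.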

Phase-shift keying (PSK) is a digital modulation process which conveys data by changing (modulating) the phase of a reference signal (the carrier wave) - in the binary case, switching between one of two phases. The modulation occurs by varying the sine and cosine inputs at precise times. It is widely used for wireless LANs, RFID and Bluetooth communication.
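Binary PSK can be sketched in a few lines: each data bit selects a carrier phase of 0 or pi for one symbol period, and the receiver correlates the received samples against the reference carrier and reads the sign (an illustrative sketch with made-up parameter values, not any SDK's modulation scheme):

```java
// Binary PSK sketch: each bit maps to carrier phase 0 or PI over one symbol;
// the receiver correlates with the reference carrier and takes the sign.
public class Bpsk {
    public static double[] modulate(int[] bits, double carrierFreq,
                                    double sampleRate, int samplesPerSymbol) {
        double[] out = new double[bits.length * samplesPerSymbol];
        for (int i = 0; i < bits.length; i++) {
            double phase = bits[i] == 0 ? 0.0 : Math.PI;
            for (int n = 0; n < samplesPerSymbol; n++) {
                int idx = i * samplesPerSymbol + n;
                out[idx] = Math.cos(2 * Math.PI * carrierFreq * idx / sampleRate + phase);
            }
        }
        return out;
    }

    public static int[] demodulate(double[] signal, double carrierFreq,
                                   double sampleRate, int samplesPerSymbol) {
        int[] bits = new int[signal.length / samplesPerSymbol];
        for (int i = 0; i < bits.length; i++) {
            double corr = 0;
            for (int n = 0; n < samplesPerSymbol; n++) {
                int idx = i * samplesPerSymbol + n;
                corr += signal[idx] * Math.cos(2 * Math.PI * carrierFreq * idx / sampleRate);
            }
            bits[i] = corr > 0 ? 0 : 1; // positive correlation = phase 0 = bit 0
        }
        return bits;
    }
}
```

With an 18 kHz carrier at a 44.1 kHz sample rate and 441 samples per symbol, each symbol spans exactly 180 carrier cycles, so the correlation is clean.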

The Scientist and Engineer's Guide to Digital Signal Processing
The Goertzel Algorithm
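
Several of the SDK decoders listed below (Fidzup visibly, likely others) lean on the Goertzel algorithm because it measures the energy at a single target frequency with one cheap recursive pass, rather than computing a full FFT. A minimal sketch (not taken from any SDK):

```java
public class Goertzel {
    // Squared magnitude of one frequency bin over a block of samples.
    public static double magnitude(double[] samples, double targetHz, double sampleRate) {
        double w = 2.0 * Math.PI * targetHz / sampleRate;
        double coeff = 2.0 * Math.cos(w);
        double s0, s1 = 0.0, s2 = 0.0;
        for (double x : samples) {        // second-order IIR recursion
            s0 = x + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        // standard Goertzel power formula for the target bin
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;
    }
}
```

Fidzup's processAudio() further down calls a Goertzel object in exactly this role, accumulating magnitudes over 50%-overlapping windows.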


Alphonso
libacr.so
.string "1.4.10"

libas.so


Bitsound (Soundlly)
libdecoder.so
    // decompiled fragment: detector statistics (SNR in dB, nuclear norm bounds) reported as JSON
    jSONObject2.put("edTy", d);
    jSONObject2.put("edSNRdB", d2);
    jSONObject2.put("edTyLower", d3);
    jSONObject2.put("edSNRdBLower", d4);
    jSONObject2.put("nuclearNormLower", d5);
    jSONObject2.put("edTyUpper", d6);
    jSONObject2.put("edSNRdBUpper", d7);
    jSONObject2.put("nuclearNormUpper", d8);
    if (this.p == 2) {
      jSONObject2.put("edPass", z ? 1 : 0);
      jSONObject2.put("edEnergyRatioArraydB", jSONArray);
     }
cifrasoft
libac_rx.so
libac_tx.so
libsl.so
libhsscl.so (actv8)


Copsonic
liboffline-sound-detector-wrapper.so
Appears to do most of its processing in Java code.
    // fixed-coefficient second-order IIR filter (direct-form difference equation)
    private static double[] filter(double[] xx) {
      int n = xx.length;
      double[] x = new double[n];
      double[] b = new double[]{0.0711d, -0.1422d, 0.0711d};
      double[] a = new double[]{1.0d, 1.1173d, 0.4016d};
      x[0] = b[0] * xx[0];
      x[1] = ((b[0] * xx[1]) + (b[1] * xx[0])) - (a[1] * x[0]);
      for (int i = 2; i < n; i++) {
        x[i] = ((((b[0] * xx[i]) + (b[1] * xx[i - 1])) + (b[2] * xx[i - 2])) - (a[1] * x[i - 1])) - (a[2] * x[i - 2]);
      }
      return x;
    }
    // checks one 2205-sample frame for relative energy in the >= 18 kHz region
    public static boolean isUltrasoundDetected(short[] aSamples) {
      if ($assertionsDisabled || aSamples.length >= 2205) {
        int i;
        double[] x = filter(normalise(Arrays.copyOfRange(aSamples, 0, 2205)));
        double[] y = new double[2205];
        int zeroIndex = 1102 + 1;
        double[] f = new double[zeroIndex];
        for (i = 0; i < zeroIndex; i++) {
          f[i] = (((double) 22050) * (((double) (i + i)) + 0.0d)) / ((double) 2205);
        }
        FFT.fft(x, y);
        for (i = 0; i < 2205; i++) {
          x[i] = (2.0d * ((x[i] * x[i]) + (y[i] * y[i]))) / ((double) 4862025);
        }
        double sum = 0.0d;
        double max = Double.MIN_VALUE;
        for (i = 0; i < zeroIndex; i++) {
          max = Math.max(max, x[(i + 1103) - 1]);
          if (f[i] >= 18000.0d) {
            sum += x[(i + 1103) - 1];
          }
        }
        if ((sum / ((double) zeroIndex)) / max < fThreshold) {
          return true;
        }
        return false;
      }
      throw new AssertionError();
    }
DV (Dov-e)
libdvsdk.so

fanpictor
libFanpictorDSP.so

Fidzup
Appears to do most of its processing in Java code.
    // feeds captured samples to a Goertzel detector in windows with 50% overlap
    private void processAudio(short[] buffer, int nbOfShorts) {
      int count = 0;
      int bufIndex = 0;
      while (bufIndex < nbOfShorts) {
        count = Math.min(nbOfShorts - bufIndex, this.samplesBufCountdown);
        this.goertzel.processShorts(buffer, bufIndex, count);
        this.samplesBufCountdown -= count;
        if (this.samplesBufCountdown <= 0) {
          if (this.goertzel.sampleCount >= this.goertzel.sampleCountMax) {
            addMagnitude((float) this.goertzel.getMagnitude());
          }
          this.goertzel.init();
          if (this.samplesBufSavedCount > 0) {
            this.goertzel.processShorts(this.samplesBuf, 0, this.samplesBufSavedCount);
            this.samplesBufSavedCount = 0;
          }
          this.goertzel.processShorts(buffer, bufIndex, count);
          this.samplesBufCountdown = (int) (((float) this.windowSize) * 0.5f);
        }
        bufIndex += count;
      }
      if (this.samplesBufCountdown != ((int) (((float) this.windowSize) * 0.5f))) {
        System.arraycopy(buffer, bufIndex - count, this.samplesBuf, 0, count);
        this.samplesBufSavedCount = count;
      }
    }
Fluzo
libfluzo.so
    // obfuscated configuration object; 16000 is consistent with a sample rate in Hz
    public static b a() {
      b bVar = new b();
      bVar.a = 16000;
      bVar.b = 300;
      bVar.c = 25;
      bVar.d = 0.97f;
      bVar.e = 40;
      bVar.f = 25;
      bVar.g = 5;
      return bVar;
    }
Hotstar
libtransformdatajni.so

Lisnr
libhflat.so
    // native entry points for packet assembly, modulation and PCM processing
    public native byte[] assembleDataPacket(byte[] bArr);
    public native byte[] assembleTextPacket(char[] cArr);
    public native int measureModulatedSamples(byte[] bArr, int i);
    public native double modulatePacketBytes(byte[] bArr, byte[] bArr2, int i, double d);
    public native void processPcmData(short[] sArr, int i);
Moodmedia
libmp.so

Prontoly/Sonarax
libSonaraxNativeSDK-4.2.so
    public static final DataRangeMode DEFAULT_DATA_RANGE_MODE = DataRangeMode.LONG;
    public static final SymbolRangeMode DEFAULT_SYMBOL_RANGE_MODE = SymbolRangeMode.LARGE;
    public static final float DEFAULT_VOLUME = 0.75f;
    public static final int FOREVER = -1;
    public static final float IMMUTABLE_VOLUME = -1.0f;
    private static DataRangeMode e = DEFAULT_DATA_RANGE_MODE;
    private static SymbolRangeMode f = DEFAULT_SYMBOL_RANGE_MODE;
    private static Channel g = Channel.CHANNEL_ONE;
RealityMine
libsoxr.so


Redbricklane/Zapr
libzaprdatajni.so
    public static final int MAX_FILES_UPLOAD = 10;
    private static final int RECORDER_AUDIO_ENCODING = 2; // AudioFormat.ENCODING_PCM_16BIT
    private static final int RECORDER_CHANNELS = 16; // AudioFormat.CHANNEL_IN_MONO
    private static final int RECORDER_SAMPLERATE = 8000;
    private int MAX_UPLOAD_FAILURE_COUNT = 3;
runacr
librunacr.so

Shopkick
libpresence.so
    // decompiled fragment: frequency-coding presence-detection parameters copied from a flags object
    params.bitDetectThreshold = flags.pdFreqCodingBitDetectThreshold;
    params.gapInSamplesBetweenLowFreqAndCalibration = flags.pdFreqCodingGapInSamplesBetweenLowFreqAndCalibration;
    params.maxFracOfAvgForOne = flags.pdFreqCodingMaxFreqOfAvgForOne;
    params.numSamplesToCalibrateWith = flags.pdFreqCodingNumSamplesToCalibrateWith;
    params.presenceDetectMinBits = flags.pdFreqCodingPresenceDetectMinBits;
    params.presenceNarrowBandDetectThreshold = flags.pdFreqCodingPresenceNarrowBandDetectThreshold;
    params.presenceStrengthRatioThreshold = flags.pdFreqCodingPresenceStrengthRatioThreshold;
    params.presenceWideBandDetectThreshold = flags.pdFreqCodingPresenceWideBandDetectThreshold;
    params.wideBandPresenceDetectEnabled = flags.pdWideBandDetectEnabled;
    params.useErrorCorrection = flags.pdFreqCodingUseErrorCorrection;
    params.frFactors = flags.pdFreqCodingFrFactors;
    params.numPrefixBitsRequired = flags.pdPrefixBitsRequired;
    params.minCarriers = flags.pdFreqCodingMinCarriers;
    params.maxIntermediates = flags.pdFreqCodingMaxIntermediate;
    params.carrierThreshold = flags.pdFreqCodingCarrierThreshold;
    params.detectThreshold = flags.pdFreqCodingDetectThreshold;
    params.noiseThreshold = flags.pdFreqCodingNoiseThreshold;
Signal360/sonic notify
libsn.so
    // steps through the capture buffer in 256-sample blocks, passing each block
    // to processSamplesAtTime() with a derived timestamp
    public long processBuffer(int byteCount) {
      int stepCount = byteCount / 256;
      long code = 0;
      for (int ii = 0; ii < stepCount; ii++) {
        int offset = ii * 256;
        for (int q = 0; q < 256; q++) {
          this.mSamples[q] = this.mData[offset + q];
        }
        this.mSamplesBuffer.position(0);
        this.mSamplesBuffer.put(this.mSamples);
        long tempCode = processSamplesAtTime(this.mSamplesBuffer, this.base_timestamp + bytes2millis(offset * 2));
        if (tempCode > 0) {
          if (this.mService.useCustomPayload()) {
            int customPayload = getCustomPayload();
            Log.d(TAG, "Heard signal " + tempCode + " customPayload " + customPayload);
            code = tempCode;
            this.mService.heardCode(new SignalAudioCodeHeard(code, null, Integer.valueOf(customPayload)));
          } else {
            long timeInterval = getTimeIntervalRel(System.currentTimeMillis());
            Log.d(TAG, "Heard signal " + tempCode + " timeInterval " + timeInterval);
            code = tempCode;
            this.mService.heardCode(new SignalAudioCodeHeard(code, Long.valueOf(timeInterval), null));
          }
        }
      }
      return code;
    }
Silverpush
Appears to do most of its processing in Java code.
    // decompiled scan loop: sweeps 18000-20000 Hz for the strongest detector response
    int i3 = 18000;
    i = 0;
    double d = 0.0d;
    while (i3 < 20000) {
    f fVar = new f((float) this.g, (float) i3, dArr);
    fVar.a();
    double c = fVar.c();
    if (c > d) {
      i = i3;
      d = c;
    }
    int i4 = c > ((double) a.c) ? i2 + 1 : i2;
    if (i4 > 1) {
      a.c += 3000;
      i3 = i;
      i = i4;
      if (i == 1 && r4 > ((double) a.c)) {
        publishProgress(new Integer[]{Integer.valueOf(i3)});
        return;
      }
    } 
Soniccode
libsoniccode-lib.so

Soundpays
libAudioFFT.so

ToneTag
libndklib.so
    public static native void initFFTUltraTones(int i);
    public native void FFTMusicalTones(double[] dArr, double[] dArr2);
    public native void FFTMusicalTones16Byte(double[] dArr, double[] dArr2);
    public native void FFTNormal(double[] dArr, double[] dArr2);
    public native void FFTUltraTones(double[] dArr, double[] dArr2);
    public native void initFFTMusicalTones(int i);
    public native void initFFTMusicalTones16Byte(int i); 


Trillbit
libnative-lib.so
libtrillBPP.so
    // decompiled fragment: splits the recording into 4096-sample chunks and
    // sets fixed start points and lengths for decoding
    private void decodeAudio() {
      this.finalAudioChunks = get_audio_chunks_from_array(this.audioChunks, 4096);
      this.start_point_1[0][0] = 15000;
      this.start_point_1[0][1] = 25000;
      this.start_point_2[0][0] = 40000;
      this.start_point_2[0][1] = 30000;
      this.start_point_3[0][0] = 70000;
      this.start_point_3[0][1] = 25000;

      // second decompiled variant of the start-point table:
      this.finalAudioChunks = get_audio_chunks_from_array(this.audioChunks, 4096);
      this.start_point_1[0][0] = 0;
      this.start_point_1[0][1] = 30000;
      this.start_point_2[0][0] = 30000;
      this.start_point_2[0][1] = 30000;
      this.start_point_3[0][0] = 60000;
      this.start_point_3[0][1] = 30000;


Uber
libstmf-modem.so
    // preamble carrier frequencies: 17.00 kHz to 18.75 kHz in 250 Hz steps
    STMFPreambleFreqIndex {
      _17P00KHZ,
      _17P25KHZ,
      _17P50KHZ,
      _17P75KHZ,
      _18P00KHZ,
      _18P25KHZ,
      _18P50KHZ,
      _18P75KHZ
    }

Network signatures

ACR Cloud
http://api.acrcloud.com/v1/devices/login
identify-ap-southeast-1.acrcloud.com


alphonso
acrdb.alphonso.tv
http://tkacr254.alphonso.tv:4432/v5/audio/buffer
http://tkacr254.alphonso.tv:4432/v5/audio/fingerprint
http://tkacr263.alphonso.tv
Server Domain set as: http://tkacr187.alphonso.tv
Server Port set as: 4432
http://prov.alphonso.tv
Prov Server Port set as: 4000


bitsound (soundlly)
https://apis.soundl.ly/v4/contents
https://apis.soundl.ly/v4
BASE_INIT_URL = "https://apis.soundl.ly/v1";
BASE_API_URL = "https://apis.soundl.ly/v3";
BASE_AUTH_URL = "https://apis.soundl.ly/apps";
BASE_LOG_URL = "https://logs.soundl.ly/v2/log";
https://s3-ap-northeast-1.amazonaws.com/bitsound.core.param
https://s3-ap-northeast-1.amazonaws.com/bitsound.sdk.schedule


cifrasoft (actv8)
http://search.tele.fm
http://search2.tele.fm
https://api-production-v4.actv8technologies.com/
http://sonar.actv8technologies.com/fdb/
http://mobiimedia.com/


fidzup
http://api.spotinstore.com/api/v2/mobile/zones/


fluzo
https://platform.fluzo.com/audience/
https://platform.fluzo.com/settings
https://match.fluzo.com


hotstar
https://hb-analytics.hotstar.com/viewers/registration
https://fp-analytics.hotstar.com/submit_sample_live
https://hb-analytics.hotstar.com/livesync
https://fp-analytics.hotstar.com/submit_sample
https://fp-analytics.hotstar.com/timesync
https://events-analytics.hotstar.com/debug
https://events-analytics.hotstar.com/crash
https://bifrost.hotstar.com
https://us.hotstar.com/
http://img.hotstar.com/image/upload/
https://services.hotstar.com/
https://api.hotstar.com/
http://cape.hotstar.com/api/v1/trays/
https://service-intl.hotstar.com/prod/
https://persona.hotstar.com/
https://sportzsdk.hotstar.com/


lisnr
https://api.lisnr.com/api/v1/


moodmedia
staging.moodpresence.com
api.moodpresence.com

prontoly/sonarax
http://40.117.230.177:1337/user/login/
http://40.117.230.177:1337/registration/mobile/


redbricklane (zapr)
https://appmm.zapr.in/viewers/registration
VIDEO_AD_SERVER_BASE_URL = "https://asg.zapr.in/zapr";
BASE_URL_APPMM_ZAPR = "https://appmm.zapr.in/";
BASE_URL_SUBMIT_ZAPR = "http://submit.zapr.in/";
LIVESYNC_TEST_URL = "http://appmm.zapr.in/livesync";
http://sdkevents.zapr.in/debug
http://sdkevents.zapr.in/crash
https://asg.zapr.in/zapr


shopkick
DEFAULT_AUTH_DOMAIN = "sdk.shopkick.com";
beta.shopkick.com
app.shopkick.com


signal360/sonic notify
server.url=https://content.signal360.com
https://content.sonicnotify.com


silverpush
http://54.243.149.109:8040/register
http://54.243.149.109:8086/receiver
http://dev.prism.silverpush.co/#!/" + currentTime + "/liveads";

soniccode
https://shop.soniccode.net


tonetag
https://capp.tonetag.com/api/v2/enterprises/notify_lane
https://capp.tonetag.com/api/v1/sdk_users
cappstage.tonetag.com


trillbit
http://13.229.229.48:8091/v1/
https://api.trillbit.com/v1/
https://devapi.trillbit.com/v1/
CALIBRATION_URL = "user/mic/calibration/"
https://api.trillbit.com/client/v2/camera_offers

Data sets

Lists as derived from the SDKs only (PII such as device and user IDs, etc.).
N.B. MCC = Mobile Country Code, MNC = Mobile Network Code,
LAC = Location Area Code, CID = Cell ID, PSC = Primary Scrambling Code.

alphonso
"facebook_uid"
"facebook_login"
"android_id"
"device_id"
"device_maker"
"device_name"
"os_version"
"uuid"
"latitude"
"longitude"
"altitude"
"speed"
"bearing"
"accuracy"
getTimeZoneOffsetInMinutesFromUTC()
getDefault().inDaylightTime(new Date())
getNetworkCountryIso()
getTMCountryCode(cxt)

bitsound
TimeZone.getTimeZone("UTC")
"adid"
"user_id"
"phoneModel"
"osVer"
telephonyManager.getNetworkOperatorName()
"battery"
"charging"

fidzup
NA

fluzo
NA

hotstar
"android.intent.action.BOOT_COMPLETED"
"android.intent.action.ACTION_POWER_CONNECTED"
"android.intent.action.BATTERY_CHANGED"
getDeviceId()
getLatitude()
getLongitude()
getAccuracy()
getCellLocation()
"imei : "
"countrycode: "
"timezone: "
getNetworkType()
"android_id"
"gender"
"birthday"
"Network Operator Name : "
"\nSim Operator Name : "
"MNC/MCC: "
"LAC: ", " CID: ", " PSC: "
"device ABI's"
"advertiser id = "
      public enum DEVICE_INFO_TYPE {
        OS,
        OS_VERSION,
        MAKE,
        MODEL,
        SCREEN_WIDTH,
        SCREEN_HEIGHT,
        LANGUAGE,
        HARDWARE_VERSION,
        PPI_SCREEN_DENSITY,
        APP_BUNDLE,
        APP_NAME,
        APP_VERSION,
        CARRIER,
        CONNECTION_TYPE,
        ORIENTATION
      } 
lisnr
advertisingId
userId
"android.net.conn.CONNECTIVITY_CHANGE"
"android.intent.action.SCREEN_ON"
"android.intent.action.SCREEN_OFF"
"android.intent.action.PHONE_STATE"

prontoly/sonarax
getActiveNetworkInfo()
"username"
"uuid"
"os"
"model"
"userID"

redbricklane/zapr
    DATASET_ADVT_ID = "advertising_id";
    DATASET_APP_USAGE_FREQ = "app_use_freq";
    DATASET_CONNECTIVITY_TYPE = "network_type";
    DATASET_COUNTRY = "country_code";
    DATASET_DATA_RECEIVED_BYTES = "recieved_bytes";
    DATASET_DATA_RECEIVED_BYTES_MOBILE = "recieved_mobile_bytes";
    DATASET_DATA_RECEIVED_BYTES_WIFI = "recieved_wifi_bytes";
    DATASET_DATA_TRANSMITTED_BYTES = "transmitted_bytes";
    DATASET_DATA_TRANSMITTED_BYTES_MOBILE = "transmitted_mobile_bytes";
    DATASET_DATA_TRANSMITTED_BYTES_WIFI = "transmitted_wifi_bytes";
    DATASET_DEVICE_LANG = "device_lang";
    DATASET_DEVICE_STORAGE_MEMORY_FREE = "storage_available_size_mb";
    DATASET_DEVICE_STORAGE_MEMORY_TOTAL = "storage_total_size_mb";
    DATASET_DEVICE_UPTIME = "up_time";
    DATASET_IMEI = "imei";
    DATASET_INSTALLED_APPS = "installed_packages";
    DATASET_LOCATION_LAT_LONG = "location";
    DATASET_NUMBER_OF_PHOTOS = "photos_count";
    DATASET_OPERATOR_CELL_LOCATION = "operator_cell_location";
    DATASET_OPERATOR_CIRCLE = "operator_circle";
    DATASET_OPERATOR_NAME = "operator";
PREF_BATTERY_CHECK = "batteryCheck";
PREF_BATTERY_LEVEL = "battery_level";
PREF_CHARGING = "charging";
getOperatorMCC(Context context)
getOperatorMNC(Context context)
getSupportedABIs()
"deviceId"
"ifa"
"model"
"osVer"
"country"
    jSONObject2.put("bundle", this.q);
    jSONObject2.put("androidId", this.v);
    jSONObject2.put("advtId", this.u);
    jSONObject2.put("advtLmt", this.w);
    jSONObject2.put("sdkVer", this.t);
    jSONObject2.put("appName", this.s);
    jSONObject2.put("appVer", this.r);
    jSONObject2.put("osForkName", this.x);
    jSONObject2.put("devHwv", this.A);
    jSONObject2.put("devMake", this.z);
    jSONObject2.put("devModel", this.y);
    jSONObject2.put("apiLevel", this.D);
    jSONObject2.put("isRooted", this.I);
    jSONObject2.put("hasAudioPerm", this.E);
    jSONObject2.put("hasLocPerm", this.F);
    jSONObject2.put("hasPhStatePerm", this.H);
    jSONObject2.put("hasStoragePerm", this.G);
    jSONObject2.put("devDensityPpi", this.B);
    jSONObject2.put("osForkVer", this.C);
"/system/app/Super user.apk", "/sbin/su", "/system/bin/su", "/system/xbin/su", "/data/local/xbin/su", "/data/local/bin/su", "/system/sd/xbin/su", "/system/bin/failsafe/su", "/data/local/su", "/su/bin/su"
"devOrientation"
"locAge"
"locAcc"
"zapr_loc_timestamp",
"imptrackers"
"clicktrackers"
" Ad Clicked"
"gender"
"yob"
"language"
"android.intent.action.SCREEN_ON"
"android.intent.action.SCREEN_OFF"
"currency"
"ClickTracking"
"current location in video:"

shopkick
"isEmailVerified"
"isFacebookConnected"
"isPhoneVerified"
"phoneNumber"
"zipCode"
userInfo.firstName
userInfo.lastName
userInfo.country
AgeVerificationController.getAgeFromBirthday
FEMALE = 2
MALE = 1
RATHER_NOT_SAY = 0
getDeviceInfoSimCountry()
"latitude"
"longitude"
"location_name"
"location_address"
"dwell_time"
"location_loiter_radius_m"
"location_loiter_time_ms"

signal360/sonic notify
"longitude"
"latitude"
Build.BRAND
Build.MANUFACTURER
Build.MODEL
getAdvertisingIdentifier()
PREF_ADVANCED_TARGETING_SEX, "male"
PREF_ADVANCED_TARGETING_AGE, "25-36"

silverpush
"Operating System : "
"OS Version : "
"Make : "
"Model : "
"Latitude : "
"Longitude : "
"Country : "
"App Name : "
"IMEI : "
"Android ID : "
"Language : "
"MAC Address : "
"Connection Type : "

soniccode
NA

tonetag
new IntentFilter("android.net.wifi.SCAN_RESULTS")
android.permission.ACCESS_COARSE_LOCATION
f.a(c, "Screen Is ON");
f.a(c, "Screen Is OFF");
f.a(c, "User Present");

trillbit
"Device", Build.MANUFACTURER + " " + Build.MODEL
private int age;
private int fav;
private String name;
private double latitude;
private double longitude;
"user_email"

Adtech network

Example of an advertising technology companies network.
2018 version of the Lumascape for comparison: technet-2018.png
2020 version of the Lumascape for comparison: technet-2020.png

Retail IoT company network
Example of retail IoT technology companies network.

Mobile marketing company network
Example of mobile marketing technology companies network.

IoT Beacons

Small IoT beacon with NUHF audio capabilities.

Sonic Notify IoT beacon with NUHF audio capabilities.

Range of Sonic Notify IoT beacons with NUHF audio capabilities.

Shopkick mains powered IoT beacon with NUHF audio capabilities.

Fidzup Fidbox IoT beacon with NUHF audio capabilities.

ToneTag audio pod with NUHF audio capabilities.

NUHF signals

In most of the NUHF signal captures pictured below, frequency (in Hertz) runs up the left hand side and colour indicates the relative amplitude of the sound.
The patterns of these signals also help explain how they work: a repeating pattern is useful when the environment the signal exists in is noisy and may contain sounds that mask these frequencies. A piece of software may elect to listen for several repetitions of a signal as a way of ensuring that it has in fact heard a signal of interest.

Some of these signals consist of simple pulses of just a few different frequencies. In these cases a particular frequency can represent, for instance, a letter of the alphabet. Just a few letter combinations would allow a lot of unique signals.
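
A hypothetical scheme of this kind (the 18 kHz base frequency and 75 Hz spacing are illustrative assumptions, not a vendor's actual mapping) fits the whole alphabet inside the NUHF band:

```java
public class LetterTones {
    // Illustrative values only: 26 letters spaced 75 Hz apart starting at
    // 18 kHz all land between 18000 Hz and 19875 Hz.
    static final double BASE_HZ = 18000.0;
    static final double STEP_HZ = 75.0;

    // letter -> carrier frequency for the transmitter
    public static double freqForLetter(char c) {
        return BASE_HZ + (Character.toUpperCase(c) - 'A') * STEP_HZ;
    }

    // detected peak frequency -> letter for the receiver
    public static char letterForFreq(double hz) {
        return (char) ('A' + (int) Math.round((hz - BASE_HZ) / STEP_HZ));
    }
}
```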

More sophisticated signals consist of many short pulses at particular frequencies. In these cases alternating between two key frequencies is enough to effect a binary code of 1s and 0s. A lot of information can be transmitted in a relatively short period of time, e.g. 32 bits in 320 milliseconds (10 ms per bit).
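
Sketched as binary frequency-shift keying with 10 ms symbols (the two carrier frequencies here are assumptions), 32 bits occupy exactly 320 milliseconds of audio:

```java
public class NuhfFsk {
    static final double SAMPLE_RATE = 44100.0;
    static final double FREQ_ZERO = 18500.0; // assumed carrier for a 0 bit
    static final double FREQ_ONE  = 19500.0; // assumed carrier for a 1 bit
    static final int SAMPLES_PER_BIT = (int) (SAMPLE_RATE * 0.010); // 10 ms per bit

    // Each bit becomes a 10 ms burst at one of two NUHF frequencies.
    public static double[] encode(int[] bits) {
        double[] out = new double[bits.length * SAMPLES_PER_BIT];
        for (int b = 0; b < bits.length; b++) {
            double f = (bits[b] == 0) ? FREQ_ZERO : FREQ_ONE;
            for (int i = 0; i < SAMPLES_PER_BIT; i++) {
                out[b * SAMPLES_PER_BIT + i] = Math.sin(2.0 * Math.PI * f * i / SAMPLE_RATE);
            }
        }
        return out;
    }
}
```

A receiver only has to run two Goertzel detectors, one per carrier, every 10 ms and pick the stronger of the two.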

Transmission of NUHF signals can be thought of as similar to the transmission of packets of data over WiFi, where transmitting a signal over a medium (air) requires some error correction. For example, a signal consisting of a total of 16 bits may have 11 data bits, 4 parity bits and 1 overall parity bit. This helps ensure that the signal received is an actual signal and not just some random noise or a competing or spoofed signal.
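
The 16-bit layout described above corresponds to an extended Hamming code: Hamming(15,11) with parity bits at positions 1, 2, 4 and 8, plus one overall parity bit. A sketch of the encode and single-error-correction steps:

```java
public class Hamming1511 {
    // Encode 11 data bits (array of 0/1) into a 16-bit code word.
    // Positions 1..15: parity bits at 1, 2, 4, 8; position 0: overall parity.
    public static int[] encode(int[] data) {
        int[] code = new int[16];
        int d = 0;
        for (int pos = 1; pos <= 15; pos++) {
            if (pos == 1 || pos == 2 || pos == 4 || pos == 8) continue; // parity slot
            code[pos] = data[d++];
        }
        for (int p = 1; p <= 8; p <<= 1) {
            int parity = 0;
            for (int pos = 1; pos <= 15; pos++)
                if ((pos & p) != 0) parity ^= code[pos];
            code[p] = parity; // makes the XOR over each parity group zero
        }
        int overall = 0;
        for (int pos = 1; pos <= 15; pos++) overall ^= code[pos];
        code[0] = overall;
        return code;
    }

    // Recompute the parity groups; a non-zero syndrome is the position of a
    // single flipped bit, which is corrected in place. Returns the syndrome.
    public static int correct(int[] code) {
        int syndrome = 0;
        for (int p = 1; p <= 8; p <<= 1) {
            int parity = 0;
            for (int pos = 1; pos <= 15; pos++)
                if ((pos & p) != 0) parity ^= code[pos];
            if (parity != 0) syndrome |= p;
        }
        if (syndrome != 0) code[syndrome] ^= 1;
        return syndrome;
    }
}
```

One flipped bit is repaired; two flipped bits leave the overall parity bit consistent while the syndrome is non-zero, which flags the word as uncorrectable and lets a noise-corrupted or spoofed signal be rejected.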

Another, more advanced, technique used by several audio beacon companies is to add Doppler shift measurements, allowing the tracking of people entering or exiting the range (geo-fence) of an audio beacon's transmissions.
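
As a back-of-envelope check on how measurable that is (the walking speed and speed of sound are assumed values): at a 20 kHz carrier, a person walking at about 1.4 m/s shifts the received frequency by roughly f x v / c, positive when approaching the beacon and negative when leaving it.

```java
public class DopplerShift {
    static final double SPEED_OF_SOUND = 343.0; // m/s in air at roughly 20 C

    // Moving listener, stationary beacon: received frequency is f * (1 + v/c),
    // so the shift is f * v / c. Positive v = approaching, negative = leaving.
    public static double shiftHz(double carrierHz, double listenerSpeedMs) {
        return carrierHz * listenerSpeedMs / SPEED_OF_SOUND;
    }
}
```

Around 82 Hz of shift at walking pace is well within the resolution of the frame sizes seen in the decoders above, so the sign of the shift alone can classify arrival versus departure.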

Silverpush television broadcast signal capture.

Silverpush television advert signal capture.

Copsonic demonstration signal capture.

Lisnr demonstration signal capture.

Shopkick multiple store comparison signal captures.

Signal360 television streaming signal capture.

Sonarax demonstration signal capture.

Sonarax signal jammed using the PilferShush Jammer app (19000 Hz carrier, 1000 Hz drift, Speed 1).

Dov-e web video advert signal capture.

Trillbit calibration tones signal capture.

Cifrasoft example transmission tones.

Actv8 youtube video embed NUHF signal.

ToneTag audible signal demonstration capture.

Soundpays audible signal demonstration capture.

Unknown signal capture from department store.

Unknown signal capture from department store.

HTML5 NUHF SYNTH

Frequency to alphabet signal

HTML5/JavaScript-based audio synth coded to transmit inaudible near-ultra high frequency (NUHF) tones.

Moved to its own page: nuhf-synth.html