How I broke down sound waves in my soundproofing and STC app, available on Google Play


In order to build this app I needed to understand how loud a sound was, as well as which frequencies I was hearing. Truthfully, when I started this app I underestimated how difficult that would be.


Building this app requires a few things:


  1. You need to sample the pressure of the air waves hitting the microphone very quickly in order to build a sine wave. I found a great website with some images, including the following, that really gave me the basis of my understanding; check out that site for more info.
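How quickly is "very quickly"? Sampling theory (not anything specific to this app) says the highest frequency you can capture is half the sample rate, the Nyquist limit. A tiny sketch, with a hypothetical helper name:

```java
public class NyquistDemo {
    // Nyquist limit: a microphone sampled at rate R can only capture
    // frequencies up to R / 2; anything higher aliases into the band below.
    static int maxCapturableHz(int sampleRateHz) {
        return sampleRateHz / 2;
    }

    public static void main(String[] args) {
        // At 44100 Hz we cover the full audible range (roughly 20 Hz - 20 kHz)
        System.out.println(maxCapturableHz(44100)); // 22050
        // At 11025 Hz the top of the measurable band drops to ~5.5 kHz
        System.out.println(maxCapturableHz(11025)); // 5512
    }
}
```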


In order to accomplish that I built an Android class called AudioControl. I also built an interface so that fragments could get an instance of AudioControl and make sure to call all the methods needed in Android's lifecycle.

Here are the fields at the top of the class:

<pre>// my inner classes
private PlayAudio playAudio;
private RecordAudio recordAudio;
private TestTone testTone;

private AudioRecord audioRecord;
private AudioTrack audioTrack;

private RealDoubleFFT transformer;
// TODO: 4/18/2018 raise buffer size to 4096 so we narrow our frequency range down a bit?
private int SAMPLE_RATE = 11025;  // changeable; 11025 is the lowest rate, and according to
                                  // newventuresoftware only 44100 is guaranteed to capture,
                                  // though I'm dubious based on initial testing of 100 Hz to 20 kHz
private int blockSize = 256;      // bytes per read; the number of component frequency samples the
                                  // transform outputs works out to 44100 / 2 / 256 = 86.132??
                                  // but how, when you only give it the raw input data?
private int halfBlock = blockSize / 2;
private int hzSpec;
private int reqBars = 0;          // this reworks at 116? bars total
//int reqBars = 30;</pre>
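The "86.132??" in my comment can be checked with a little arithmetic. Since 16-bit PCM packs two bytes per sample, a 256-byte block holds 128 samples, and the spacing between FFT bins is the sample rate divided by the number of samples. A standalone sketch (hypothetical class name, my own sanity check, not code from the app):

```java
public class BinWidthDemo {
    // blockSize is in bytes; 16-bit PCM means two bytes per sample
    static double binWidthHz(int sampleRate, int blockSizeBytes) {
        int samples = blockSizeBytes / 2;     // 256 bytes -> 128 samples
        return (double) sampleRate / samples; // Hz of spacing between FFT bins
    }

    public static void main(String[] args) {
        System.out.println(binWidthHz(11025, 256)); // 86.1328125 Hz per bin
    }
}
```

So at 11025 Hz and 256-byte blocks the bins land about 86 Hz apart, which is where that mysterious 86.132 comes from.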


Down below I have an inner class which extends AsyncTask:

<pre>private class RecordAudio extends AsyncTask<File, double[], Boolean> {

    String TAG = "RECORD AUDIO";

    private static final int AUDIO_SOURCE = MediaRecorder.AudioSource.MIC;
    private static final int CHANNEL_MASK = AudioFormat.CHANNEL_IN_MONO;
    private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    long startTime = 0;
    long endTime = 0;
    FileOutputStream waveOut;

    protected Boolean doInBackground(File... files) {

        try {
            dbAverage = new int[smooth];
            recordedHighestAverage = 0;
            int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_MASK, ENCODING);
            audioRecord = new AudioRecord(AUDIO_SOURCE, SAMPLE_RATE, CHANNEL_MASK, ENCODING, minBufferSize);

            if (files[0] != null) {
                waveOut = new FileOutputStream(files[0]);
                writeWavHeader(waveOut, CHANNEL_MASK, SAMPLE_RATE, ENCODING);
            } else {
                waveOut = null;
            }

            int bufferReadData;
            byte[] buffer1 = new byte[blockSize];
            long total = 0;

            try {
                audioRecord.startRecording();
                startTime = SystemClock.elapsedRealtime();
            } catch (IllegalStateException e) {
                Log.e(TAG, "Records doInBackground: " + e.toString());
            }

            while (running) {
                // we request blockSize bytes, but Android sends 16-bit samples,
                // so each byte holds half of a 16-bit sample!!
                bufferReadData = audioRecord.read(buffer1, 0, blockSize);

                publishProgress(createFFT(bufferReadData, buffer1));

                if (files[0] != null) {
                    createWavFile(total, bufferReadData, buffer1);
                }
            }
        } catch (IOException e) {
            Log.e(TAG, "Records doInBackground: " + e.toString(), e);
        } finally {
            endTime = SystemClock.elapsedRealtime();

            if (waveOut != null) {
                try {
                    waveOut.close();
                } catch (IOException e) {
                    Log.e(TAG, "doInBackground: ", e);
                }
            }
        }
        return true;
    }
}</pre>
Notice above where I publish progress to my FFT method. The code above merely shows you how to sample the sounds from the air and get them into a byte array for more processing. Next we need to do some data manipulation to get the Fourier transform and decibel information.

I found a great website that explains that a sine wave is merely a circle with time expressed as well. It was a great presentation on their part, and although I don't feel I have completely internalized the lesson, it has definitely improved my understanding.
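That circle picture can be sketched in a few lines: a sine wave is just the height of a point moving around a circle at constant speed, plotted against time. A hypothetical helper (my own illustration, not code from the app):

```java
public class SineFromCircle {
    // The angle is the point's position on the circle after i samples;
    // the sine is its height above the circle's center.
    static double sample(double freqHz, int sampleRate, int i) {
        double angle = 2.0 * Math.PI * freqHz * i / sampleRate;
        return Math.sin(angle);
    }

    public static void main(String[] args) {
        // A 100 Hz tone sampled at 400 samples/s completes one circle every 4 samples
        System.out.println(sample(100, 400, 0)); // 0.0 (start of the cycle)
        System.out.println(sample(100, 400, 1)); // 1.0 (a quarter turn: top of the circle)
    }
}
```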


I ended up downloading the FFT pack, which is free and open source. You can find it by searching for -> ca/uol/aig/fftpack

Add it as a library and then use the FFT as follows.

Initialize it at the top of the class, above all your methods:

<pre>private RealDoubleFFT transformer;</pre>
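If I recall the fftpack API, the transformer is constructed with the number of input samples (for this app that would be `new RealDoubleFFT(blockSize / 2)`), and `ft()` then transforms the buffer in place. To make that output less mysterious, here is a standalone naive DFT, my own sketch rather than the library's code, that computes the same per-bin magnitudes a real-input FFT gives you, just far more slowly:

```java
public class NaiveDft {
    // Magnitude of frequency bin k for a real signal x:
    // |sum over n of x[n] * e^(-2*pi*i*k*n/N)|
    static double binMagnitude(double[] x, int k) {
        double re = 0, im = 0;
        for (int n = 0; n < x.length; n++) {
            double angle = -2.0 * Math.PI * k * n / x.length;
            re += x[n] * Math.cos(angle);
            im += x[n] * Math.sin(angle);
        }
        return Math.sqrt(re * re + im * im);
    }

    public static void main(String[] args) {
        // A pure tone with exactly 4 cycles across the buffer lands in bin 4
        int N = 64;
        double[] x = new double[N];
        for (int n = 0; n < N; n++) x[n] = Math.sin(2.0 * Math.PI * 4 * n / N);
        System.out.println(binMagnitude(x, 4)); // N/2 = 32: all the energy in one bin
        System.out.println(binMagnitude(x, 3)); // ~0: neighboring bins stay empty
    }
}
```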


Here is the method called by our background thread as it gathers the audio data. As you can see, I needed both bytes and shorts in this software, and I stumbled through the process of converting between them; the little bit-layout sketch in the comments was my reference point.

Notice that the transformer.ft() method is doing all the FFT work. Everything before that is just cramming data into a 16-bit configuration, or getting me my decibel value.
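The trickiest line is that byte-to-short packing, so here it is in isolation first. The helper name is hypothetical, but the shift-and-mask is the same expression the method uses:

```java
public class PackBytes {
    // Android delivers 16-bit PCM as two little-endian bytes per sample.
    // The low byte must be masked with 0xFF so that, when Java widens it
    // to int, its sign bit doesn't smear ones across the upper half.
    static short toShortLE(byte lo, byte hi) {
        return (short) (hi << 8 | lo & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(toShortLE((byte) 0x34, (byte) 0x12)); // 4660 (0x1234)
        System.out.println(toShortLE((byte) 0xFF, (byte) 0xFF)); // -1 (all bits set)
        System.out.println(toShortLE((byte) 0x00, (byte) 0x80)); // -32768 (most negative sample)
    }
}
```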


<pre>protected double[] createFFT(int bufferReadData, byte[] buffer1) {

    double[] toTransform = new double[blockSize / 2];
    double sum = 0;
    int block = halfBlock;
    double REF = 0.00002;   // reference pressure, 20 micropascals

    // xx marks the meaningful bits. Each sample arrives as two little-endian
    // bytes; shift the big (high) byte left and OR in the little (low) byte.
    //  xx xx xx xx 00 00 00 00
    //  00 00 00 00 xx xx xx xx
    //  xx xx xx xx xx xx xx xx
    // The 0xFF mask fixes a left-padding problem: it stops the low byte's
    // sign bit from filling the upper half with ones.
    short newBuff, buffLil, buffBig;
    int count = 0;

    for (int i = 0; i < blockSize / 2 && i < bufferReadData / 2; i++) {

        buffLil = buffer1[count];
        buffBig = buffer1[count + 1];
        newBuff = (short) (buffBig << 8 | buffLil & 0xFF);
        count += 2;   // advance two bytes per 16-bit sample

        if (newBuff > 0) {
            sum += Math.abs(newBuff);
        }

        // divide by the max short value to get a double between -1.0 and 1.0
        // for input into the FFT
        toTransform[i] = (double) newBuff / 32768.0;
    }

    double x = sum / block;   // average amplitude over the block
    double db = 0;

    if (x != 0) {

        double pressure = x / 51805.5336;   // rough scaling from raw amplitude to pascals

        db = 20 * Math.log10(pressure / REF);

        if (db > 0) {
            averageDecibelForView = average((int) db);
        }
    }

    transformer.ft(toTransform);   // the in-place FFT does all the real work
    return toTransform;
}</pre>


You may also notice that I am calculating my decibels based on guesstimated microphone pressure values someone provided me, and that I push those values into the average() method.
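For reference, the conversion in the method above is the standard sound-pressure-level formula, dB = 20 * log10(p / p0), with p0 = 20 micropascals (the quietest pressure a human can hear). The 51805.5336 scaling factor is the app's guesstimate; the formula itself can be sanity-checked on its own:

```java
public class SplDemo {
    static final double REF = 0.00002; // 20 micropascals, threshold of hearing

    // Sound pressure level: decibels relative to the threshold of hearing
    static double toDb(double pressurePa) {
        return 20.0 * Math.log10(pressurePa / REF);
    }

    public static void main(String[] args) {
        System.out.println(toDb(0.02));    // ~60 dB: roughly conversation level
        System.out.println(toDb(0.00002)); // 0 dB: the reference pressure itself
    }
}
```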

This FFT ran very fast despite the math required, but I was only refreshing the screen about 30 times per second, so not all of the data needed to hit the screen, and the user certainly didn't need to see the decibel amplitude jump around 30 times per second.
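The average() method isn't shown in this post, so here is a minimal sketch of the kind of smoothing it implies: a ring buffer holding the last `smooth` readings, returning their mean. All names here are hypothetical, not the app's actual code:

```java
public class DbSmoother {
    private final int[] window;        // the last N decibel readings
    private int next = 0, filled = 0;  // write position and how many slots hold data

    DbSmoother(int smooth) { window = new int[smooth]; }

    // Record one new reading and return the mean of the readings so far
    int average(int db) {
        window[next] = db;
        next = (next + 1) % window.length; // wrap around, overwriting the oldest
        if (filled < window.length) filled++;
        int sum = 0;
        for (int i = 0; i < filled; i++) sum += window[i];
        return sum / filled;
    }

    public static void main(String[] args) {
        DbSmoother s = new DbSmoother(3);
        System.out.println(s.average(60)); // 60
        System.out.println(s.average(66)); // 63  ((60 + 66) / 2)
        System.out.println(s.average(60)); // 62  ((60 + 66 + 60) / 3)
        System.out.println(s.average(90)); // 72  (oldest reading dropped)
    }
}
```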

Below you can see the result. I will get into drawing the display in my next post, but it shows the current decibels at top left and a moving bar chart below.


[Screenshot: Fast Fourier transform (FFT) display on Android]




The post Getting Fast Fourier Transform data on Android appeared first on SignalHillTechnology.
