Audio Watermarking Based on Statistical Feature in Wavelet Domain

Wei Li
Department of Computer Science and Engineering, Fudan University
Shanghai, P. R. China, 200433
Xiangyang Xue
Department of Computer Science and Engineering, Fudan University
Shanghai, P. R. China, 200433


In this paper, we propose a novel synchronization-invariant audio watermarking scheme based on statistical feature manipulation in the wavelet domain. Experimental results demonstrate that the algorithm is robust to MP3 compression, low-pass filtering, equalization, echo addition, resampling, noise addition, pitch shifting, and random cropping, and that it also shows limited resistance to time-scale modification. The watermarked audio has very high perceptual quality and is indistinguishable from the original signal. A blind detection technique, which requires neither the original signal nor the original watermark, is developed to extract the embedded watermark image under various types of attack. To ensure the security of the watermark, a random chaotic sequence is employed in both embedding and detection.


Digital audio watermarking, Discrete wavelet transform, Statistical feature, Random cropping, Chaos


In this research, we propose a novel synchronization-invariant audio watermarking scheme based on a statistical feature in the wavelet domain. Special attention is paid to the synchronization attack caused by casual audio editing or malicious random cropping, a low-cost yet effective attack against most existing watermarking algorithms based on the classical spread-spectrum technique. Although a number of audio watermarking methods have been developed [1,2], most of them are vulnerable to random cropping attacks such as jittering, and few published works report experiments against this attack as extensive as those presented in this paper.

In audio analysis and classification, the extracted wavelet coefficients provide a compact representation that shows the energy distribution of the signal in the time and frequency domains. To further reduce the dimensionality of the extracted feature vectors, statistics over a set of wavelet coefficients can also be used to represent the statistical characteristics of the texture or the music surface of the audio piece [3].

In this paper, for the convenience of watermark embedding, we adopt the mean of the coefficient values, rather than the mean of the absolute coefficient values, at the coarsest approximation subband as the statistical feature. Because these statistical features are calculated from the wavelet coefficients at the coarsest approximation subband, which represents the perceptually most significant low-frequency components of the audio signal, they are expected to be relatively stable under common signal processing such as MP3 compression, low-pass filtering, equalization, noise addition, echo addition, and resampling. Moreover, owing to the high correlation between adjacent audio samples and small blocks, random cropping of a small clip of audio will not change this statistical feature greatly, even though an individual coefficient may change considerably. In this way, the statistical feature can also be expected to be invariant to slight random cropping in the time domain. Therefore, this statistical feature of the wavelet coefficients at the coarsest approximation subband is a good candidate for watermark embedding.
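This invariance claim can be illustrated with a toy numerical experiment. The sketch below uses a hand-rolled orthonormal Haar pyramid (our own illustration, not the authors' code) and shows that cropping a few samples barely shifts the mean of the coarsest approximation coefficients of a smooth signal:

```python
import numpy as np

def coarse_mean(x, levels=3):
    """Mean of the level-`levels` Haar approximation coefficients."""
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # Haar low-pass step
    return a.mean()

t = np.arange(2048) / 44100.0
frame = np.sin(2 * np.pi * 220 * t) + 0.5        # smooth tone with a DC offset
cropped = np.delete(frame, np.arange(100, 105))  # crop 5 samples
cropped = cropped[:2040]                         # keep a length divisible by 8
m0, m1 = coarse_mean(frame[:2040]), coarse_mean(cropped)
print(abs(m0 - m1) < 0.05 * abs(m0))             # relative change stays small
```

The relative change in the coarse-subband mean remains well under a few percent, while individual coefficients after the crop point are shifted substantially.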

2. Algorithm Description and Experimental Results

2.1 Embedding Algorithm

(1). The input audio signal is first segmented into overlapped frames. Given a sampling frequency of 44100 Hz, the frames are 2048 samples each, with 75% (1536 samples) overlap between every two adjacent frames. Each frame is then Hamming-windowed to minimize Gibbs effects. Note that the frame size is a trade-off between perceptual transparency (small frames) and detection reliability (large frames); our experiments show that a frame size of 2048 samples with 75% overlap is a good compromise.
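As a minimal sketch of this step (the helper name `segment_frames` and the use of NumPy are our own illustration, not the authors' implementation), the framing might look like:

```python
import numpy as np

def segment_frames(signal, frame_len=2048, overlap=0.75):
    """Split a 1-D signal into Hamming-windowed frames with the given overlap."""
    hop = int(frame_len * (1 - overlap))       # 512 samples per hop at 75% overlap
    window = np.hamming(frame_len)             # taper to reduce Gibbs effects
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])

x = np.arange(44100, dtype=float)              # one second at 44.1 kHz
frames = segment_frames(x)
print(frames.shape)                            # (83, 2048)
```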

(2). For each audio frame, three-level wavelet decomposition is performed with the 'db4' or 'haar' wavelet basis, and then the mean of all the wavelet coefficients at the coarsest approximation subband (i.e. ca3) is calculated. Next, the mean is removed from all coefficients at the ca3 subband to facilitate the embedding process.
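The decomposition and mean removal can be sketched as follows. For brevity we hand-roll one orthonormal Haar analysis step rather than call a wavelet library (in practice something like PyWavelets' `wavedec` with 'db4' or 'haar' would be used); the function names are our own:

```python
import numpy as np

def haar_analysis(x):
    """One level of the orthonormal Haar DWT: approximation and detail bands."""
    x = x.reshape(-1, 2)
    a = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

def ca3_of_frame(frame):
    """Three-level decomposition; return the coarsest approximation ca3."""
    a = np.asarray(frame, dtype=float)
    for _ in range(3):
        a, _d = haar_analysis(a)
    return a

frame = np.random.default_rng(0).standard_normal(2048)
ca3 = ca3_of_frame(frame)                # 2048 / 2^3 = 256 coefficients
mean = ca3.mean()
zero_mean_ca3 = ca3 - mean               # mean removed, as in step (2)
print(len(ca3), abs(zero_mean_ca3.mean()) < 1e-12)
```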

(3). The watermark data, a 32-by-32 binary logo image (Figure 1) in our experiment, is transformed into a one-dimensional sequence of ones and zeros and then encrypted using a random chaotic sequence generated according to [4]. Most other existing watermarking algorithms use a pseudorandom sequence as the watermark, which is less intuitive than an image, and their correlation detection depends heavily on the choice of threshold. Next, each watermark bit is embedded into the corresponding audio frame as follows: if w(i) = 1, all ca3-level wavelet coefficients in the i-th frame are increased by a; if w(i) = 0, they are decreased by a, where a is a small number of the same order of magnitude as the frame mean, adjusted so as not to introduce any audible artifacts into the watermarked audio.
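The encryption and embedding rule can be sketched as below. The logistic-map key stream is one common chaotic construction and only stands in for the generator of [4], whose exact form may differ; the embedding strength a = 0.05 is an arbitrary illustrative value:

```python
import numpy as np

def chaotic_bits(n, x0=0.7, mu=3.99):
    """Binary chaotic key stream from a logistic map (illustrative generator)."""
    bits, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return np.array(bits)

def embed_bit(ca3, bit, a):
    """Shift every ca3 coefficient by +a for bit 1, -a for bit 0."""
    return ca3 + (a if bit == 1 else -a)

watermark = np.array([1, 0, 1, 1])           # toy watermark bits
encrypted = watermark ^ chaotic_bits(4)      # XOR encryption with the key stream
ca3 = np.zeros(256)                          # zero-mean ca3 from step (2)
marked = embed_bit(ca3, encrypted[0], a=0.05)
print(marked.mean())                         # the mean now carries the bit's sign
```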

(4). The inverse discrete wavelet transform (IDWT) is applied to the modified wavelet coefficients in each frame to transform them back to the time domain.

(5). Steps 2 through 4 are repeated until all the watermark bits are embedded. Finally, all the modified frames are merged together to form the whole watermarked audio signal in time domain.


Figure 1. The original watermark: a binary logo image

2.2 Blind Watermark Detection

The detection algorithm is straightforward and blind, requiring neither the original audio signal nor the original watermark. For each segmented frame, if the mean of the wavelet coefficients at the coarsest approximation subband is greater than zero, a bit '1' is extracted; if the mean is less than zero, a bit '0' is extracted. This process is repeated until all watermark bits are detected. Finally, the detected bits are decrypted and rearranged to form the extracted watermark image.
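A minimal sketch of this sign test, continuing the illustrative NumPy setup used above (variable names and the strength a = 0.05 are our own assumptions):

```python
import numpy as np

def detect_bit(ca3):
    """Blind detection: the sign of the ca3 mean decides the bit."""
    return 1 if ca3.mean() > 0 else 0

rng = np.random.default_rng(2)
host = rng.standard_normal(256)
host -= host.mean()                       # zero-mean host coefficients
a = 0.05                                  # same embedding strength as in step (3)
print(detect_bit(host + a), detect_bit(host - a))   # 1 0
```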

2.3 Results and Discussion

The algorithm was tested over a set of audio signals including pop, saxophone, rock, piano, electronic organ, guitar, and violin. Each music piece is 15 seconds long, mono, 16 bits/sample, with a sampling rate of 44.1 kHz. The waveforms of the original and watermarked piano music, along with their difference, are shown in Figure 2.

Figure 2. The original and watermarked piano waveform and the difference between them

2.4 Robustness Test

The test conditions and results are listed in Table 1. Randomly cropping samples produces a disastrous synchronization problem for most time-domain or spread-spectrum-based watermarking methods. Our approach, however, is rather insensitive to synchronization owing to the high correlation between adjacent audio samples and small blocks. Even when several thousand samples are cropped at random positions, the mean of the wavelet coefficients at the coarsest approximation subband is not seriously affected (in particular, its sign does not change), so the binary watermark image can still be extracted and identified. Pitch-invariant time-scale modification was also tested over the watermarked audio; the approach resists this type of attack only up to +2% or -2%. With repetition coding, the result improves.
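The BER figures reported in Table 1 follow the usual definition, sketched below on toy bit strings (the helper name `ber` is our own):

```python
import numpy as np

def ber(original_bits, detected_bits):
    """Bit error rate (%) between embedded and extracted watermark bits."""
    o = np.asarray(original_bits)
    d = np.asarray(detected_bits)
    return 100.0 * np.count_nonzero(o != d) / o.size

w = np.array([1, 0, 1, 1, 0, 0, 1, 0])
w_hat = np.array([1, 0, 0, 1, 0, 0, 1, 1])    # two bits flipped by an attack
print(ber(w, w_hat))                          # 25.0
```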


This work was supported in part by the NSF of China under contract number 60003017, China 863 Projects under contract numbers 2001AA114120 and 2002AA103065, and Local Government R&D Funding under contract numbers 01QD14013 and 015115044.


  1. T. Muntean, E. Grivel, I. Nafornita, and M. Najim. Audio Digital Watermarking Based on Hybrid Spread Spectrum. IEEE Wedel Music, 2002.
  2. D. Kirovski and H. S. Malvar. Robust Spread-Spectrum Audio Watermarking. IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 1345-1348, 2001.
  3. G. Tzanetakis, G. Essl, and P. Cook. Audio Analysis using the Discrete Wavelet Transform. Int. Conf. Acoustics and Music: Theory and Applications, 2001.
  4. M. P. Kennedy. Digital Communication Using Chaos. Signal Processing, 80, pp. 1307-1320, 2000.

Table 1. BER, Similarity, and Extracted Watermark for Piano

Type of attack                                        BER (%)
MP3 (wav -> mp3 -> wav, 64 kbps, 11:1)                5.73
Echo (delay: 100 ms, decay: 40%)
Noise (clearly audible)
Equalization ('BaseBoost' of GoldWave)
Lowpass (cutoff frequency: 4 kHz)
Resample (44100 -> 22050 -> 44100 Hz)
Crop1 (crop 500 samples at 5 random positions)
Crop2 (crop 1000 samples at 10 random positions)
Crop3 (crop 5000 samples at 10 random positions)
Crop4 (crop 10000 samples at 10 random positions)
Jittering1 (crop one sample every 100 samples)
Jittering2 (crop one sample every 500 samples)
Time Scale Modification (pitch preserved, +2%)
Time Scale Modification (pitch preserved, -2%)
Pitch Shift (tempo preserved, +10%)
Pitch Shift (tempo preserved, -10%)