Music Tips - Zik World

There is No Wrong Answer in Music Composition

Writing music is one of those things you can do without ever really making a mistake. Some melodies are catchier than others, and everyone writes a few bad stanzas. That's all right; that's why we have revisions. Remembering this while writing music will make you feel better and help you avoid writer's block.


Do Something Different

It's easy to get stuck in a rut where all of your songs begin to sound the same. Even if you've found a great combination of notes or a catchy beat, changing it can be good for you and help you grow as a composer.
An easy way to try something new is to pick up an instrument you haven't played before. Sometimes you find yourself playing the same old keys or strumming the same chords. A different instrument can lead you to melodies you might not have thought of otherwise.


Practice, Practice and More Practice
There is no substitute for hard work and practice—it is the only formula that will guarantee you will become a better songwriter.



:Digital Audio Workstation (DAW):

A digital audio workstation is an electronic system designed solely or primarily for recording, editing and playing back digital audio.


:Sound Card: 

A sound card or audio card is an internal computer expansion card that facilitates the input and output of audio signals to and from a computer under control of computer programs.




:ASIO:

Audio Stream Input/Output (ASIO) is a computer sound card driver protocol for digital audio, providing a low-latency, high-fidelity interface between a software application and a computer's sound card.



:Music Sequencers:

Steinberg Cubase & Nuendo & Wavelab
Cakewalk Sonar
Ableton Live
Apple Logic
Avid Pro Tools
FL Studio
Sony Acid & Sound Forge
Synapse Orion
Ohm Studio
PreSonus Studio One



:Audio Signal Processing:


Audio signal processing is the intentional alteration of auditory signals, or sound, often through an audio effect or effects unit. Because audio signals may be represented electronically in either digital or analog format, signal processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on the digital representation of that signal.
{Digital format}
DSP: Digital Signal Processing
VST: Virtual Studio Technology
AU: Audio Unit (Apple)
RTAS: Real Time AudioSuite (Digidesign)
DirectX: (Microsoft)
LADSPA: Linux Audio Developers Simple Plugin API




:Equaliser (EQ):

An EQ or equaliser is a filter that allows you to selectively cut or boost a certain frequency. Some EQs have fixed frequencies while others allow you to select these yourself (these are known as parametric equalisers).
All parametric EQ plugins have three basic controls: a frequency selector, a Q or bandwidth selector, and a gain control (which sets the amount the filter will cut or boost the chosen frequency). Many also feature frequency graphs to help you visualise the sound.
At this point the best way to get a feel for how EQ can help you is to run a sound through an EQ plugin in your sequencer. One of the best techniques for identifying weak or problem frequencies is to set your gain control to a high boost and then sweep the frequency control through its spectrum. Certain frequencies may sound pleasant when boosted, while others may jump out at you as being too harsh. Remember that cutting frequencies can be just as valuable as boosting!
One great technique to help the low frequencies stand out in your mix is to use EQ to cut low frequencies from your lead sounds; try cutting from 60-200Hz to help create more space in your mix.
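The frequency, Q and gain controls described above can be sketched as a minimal parametric peaking EQ using the widely published biquad "cookbook" formulas. This is an illustration under assumptions, not any particular plugin's implementation; the function name and default values are made up for the example.

```python
import math

def peaking_eq(samples, fs, f0, q, gain_db):
    """Biquad peaking EQ: boost or cut gain_db around centre frequency f0 with bandwidth q."""
    a = 10 ** (gain_db / 40)                     # amplitude factor
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# Boosting 12 dB at 1 kHz makes a 1 kHz sine noticeably louder,
# just as sweeping a high-gain boost reveals frequencies in a mix.
fs = 44100
sine = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs // 10)]
boosted = peaking_eq(sine, fs, f0=1000, q=1.0, gain_db=12.0)
```

Setting a negative `gain_db` turns the same filter into a cut, which, as noted above, can be just as valuable as a boost.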




:Compression:

Let's take a look at the controls you'll find on most compressor plugins.
Threshold - this determines the volume level (in dB) that a sound must reach before the compressor takes effect. When this level is exceeded the signal is reduced (gain reduction).
Ratio - this control determines the amount of gain reduction applied to the signal once the compressor has kicked in. A ratio setting of 1:1 would mean that no gain reduction is applied, whereas a setting of infinity:1 would mean that the signal can never rise above the Threshold level (this is known as limiting).
Attack - this control dictates the time the compressor takes to reduce the gain. Fast attack times reduce the gain immediately, whereas slow attack times leave the first portion of the sound untouched, fading the gain reduction in. This is a classic technique used to enhance the qualities of percussive sounds.
Release - this setting determines the length of time it takes the compressor's gain to return to normal once the signal has fallen below the Threshold setting.
Some compressors have an Auto attack/release button that selects suitable settings for the incoming sound.
Hard Knee/Soft Knee - this control dictates the character of the compression: a Hard Knee setting instantly applies the full amount of gain reduction once the signal exceeds the Threshold, while Soft Knee allows the effect of the compressor to come in more gradually.
Side Chain - this function allows you to use an external signal to control the compressor. The effect is frequently used in electronic music: for example, producers will often feed the bass part of a song into a compressor whilst routing the kick drum through the compressor's side-chain input. This "ducks" the bass whenever the kick drum plays, producing a rhythmic pumping effect.
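The Threshold and Ratio behaviour above can be written down as a static "gain computer", the dB-domain curve at the heart of a compressor. This is a hard-knee sketch under assumptions (the function name and defaults are illustrative); real compressors add attack/release smoothing around this curve.

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Return the dB of gain reduction for a given input level (hard knee)."""
    if level_db <= threshold_db:
        return 0.0                            # below threshold: compressor is idle
    over = level_db - threshold_db
    out_db = threshold_db + over / ratio      # compressed output level
    return level_db - out_db                  # positive dB of gain reduction

# A signal 10 dB over a -20 dB threshold at 4:1 comes out at -17.5 dB,
# i.e. 7.5 dB of gain reduction. As the ratio approaches infinity:1,
# the output pins to the threshold, which is limiting.
reduction = compressor_gain_db(-10.0, threshold_db=-20.0, ratio=4.0)
```

A soft knee would blend gradually between the two branches around the threshold instead of switching abruptly.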



:Reverb:

Reverb is the acoustic environment that surrounds a sound. Natural reverb exists everywhere: whether the space being described is a bathroom or a gymnasium, the essential characteristics remain the same. Reverb is composed of a series of tightly-spaced echoes, and the number of echoes and the way that they decay play a major role in shaping the sound that you hear. Many other factors influence the sound of a reverberant space. These include the dimensions of the actual space (length, width, and height), the construction of the space (such as whether the walls are hard or soft and whether the floor is carpeted), and diffusion (what the sound bounces off of).

In addition to natural reverb, software synthesis of reverberation is also possible. Many audio cards, synthesizers, dedicated effects processors, and digital audio applications can create reverb, simulating both natural and supernatural environments. For example, one could create the reverb for a room fifty feet long, five feet wide, with a four-foot ceiling, lined with carpet. The synthesis of reverb by a digital signal processing (DSP) algorithm usually attempts to mimic the way a real acoustic space works. The algorithm designers simulate the early reflections, the compounding of echoes, and the decay of high versus low frequencies when designing their product. Of course, the more processing power and speed available, the more complex and potentially realistic a reverb signal can be created.



:Delay:

Delay is an audio effect which records an input signal to an audio storage medium and then plays it back after a period of time. The delayed signal may either be played back multiple times, or played back into the recording again, to create the sound of a repeating, decaying echo.





:Echo:

To simulate the effect of reverberation in a large hall or cavern, one or several delayed signals are added to the original signal. To be perceived as echo, the delay has to be on the order of 35 milliseconds or more. Short of actually playing a sound in the desired environment, the effect of echo can be implemented using either digital or analog methods.
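The "delayed signal played back into the recording again" idea above reduces to a single feedback delay line. A minimal sketch, with illustrative parameter names and values:

```python
def echo(samples, fs, delay_s=0.35, feedback=0.4):
    """Feedback echo: each repeat arrives delay_s later at `feedback` times the level."""
    d = max(1, int(delay_s * fs))
    out = list(samples)
    for n in range(d, len(out)):
        out[n] += feedback * out[n - d]   # feed the delayed output back in
    return out

# A single impulse produces a decaying train of echoes: 1.0, 0.5, 0.25, ...
fs = 1000                                 # deliberately low rate to keep the demo small
y = echo([1.0] + [0.0] * 999, fs, delay_s=0.1, feedback=0.5)
```

With a feedback below 1.0 the echoes decay geometrically, which is what makes the repeat train die away rather than build up.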





:Flanger:

A flanger creates an unusual "swooshing" sound by adding a delayed copy of the signal to the original with a continuously variable delay (usually smaller than 10 ms).
Using a delay line this way creates a series of equally spaced notches and peaks in the frequency response.
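The continuously variable short delay described above can be sketched with a sine-wave LFO sweeping a fractional delay line. A toy illustration under assumptions (function name, sweep rate and mix amount are made up); real flangers usually add feedback as well.

```python
import math

def flanger(samples, fs, max_delay_s=0.005, rate_hz=0.5, mix=0.7):
    """Mix the signal with a copy whose delay sweeps between 0 and max_delay_s."""
    out = []
    for n, x in enumerate(samples):
        # LFO sweeps the delay time; linear interpolation handles fractional delays
        d = (max_delay_s * fs / 2) * (1 + math.sin(2 * math.pi * rate_hz * n / fs))
        i, frac = int(d), d - int(d)
        j = n - i
        delayed = 0.0
        if j - 1 >= 0:
            delayed = (1 - frac) * samples[j] + frac * samples[j - 1]
        out.append(x + mix * delayed)
    return out

fs = 8000
tone = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
swooshed = flanger(tone, fs)
```

Because the delayed copy moves, the comb-filter notches slide up and down the spectrum, which is the "swoosh".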




:Phaser:

The electronic phasing effect is created by splitting an audio signal into two paths. One path treats the signal with an all-pass filter, which preserves the amplitude of the original signal and alters the phase; the amount of phase change depends on the frequency. When signals from the two paths are mixed, the frequencies that are out of phase cancel each other out, creating the phaser's characteristic notches.








:Chorus:

A chorus effect adds a delayed signal to the original with a constant delay. The delay has to be short in order not to be perceived as echo, but above 5 ms to be audible. If the delay is too short, it will destructively interfere with the un-delayed signal and create a flanging effect. Often the delayed signals are slightly pitch-shifted to more realistically convey the effect of multiple voices.




:Filter:

An audio filter is a frequency-dependent amplifier circuit working in the audio frequency range, 0 Hz to beyond 20 kHz. Equalization (EQ) is a form of filtering. In the general sense, frequency ranges can be emphasized or attenuated using low-pass, high-pass, band-pass or band-stop filters. Band-pass filtering of a voice can simulate the effect of a telephone, because telephones use band-pass filters.





:Distortion:

Distortion effects such as a fuzz box can be used to produce distorted sounds, for example to imitate robotic voices or to simulate distorted radiotelephone traffic. The most basic overdrive effect involves clipping the signal when its absolute value exceeds a certain threshold.
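The clipping idea described above is one line of arithmetic per sample. A minimal sketch (function name and threshold value are illustrative):

```python
def hard_clip(sample, threshold=0.5):
    """Basic overdrive: anything beyond +/-threshold is flattened to the threshold."""
    return max(-threshold, min(threshold, sample))

# Peaks above the threshold are squared off, which adds harmonic content.
clipped = [hard_clip(s) for s in (0.2, 0.6, -0.9, 0.5)]
```

Softer curves (such as a tanh-shaped transfer function) round the corners of the clipped waveform instead of flattening them, which is the territory of the saturation effect described later.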





:Pitch Shift:

This effect shifts a signal up or down in pitch. For example, a signal may be shifted an octave up or down. It is usually applied to the entire signal, not to each note separately.





:Time Stretching:

Time stretching is the counterpart of pitch shifting: the process of changing the speed of an audio signal without affecting its pitch.





:Modulation:

Modulation changes the frequency or amplitude of a carrier signal in relation to a predefined signal.
(more info coming soon)






:Normalization:

Audio normalization is the application of a constant amount of gain to an audio recording to bring the average or peak amplitude to a target level.
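Peak normalization as described above is a single constant gain computed from the loudest sample. A small sketch (function name and the -1 dBFS default target are illustrative):

```python
def normalize_peak(samples, target_db=-1.0):
    """Scale the whole recording so its peak sits at target_db dBFS."""
    peak = max(abs(s) for s in samples)
    gain = 10 ** (target_db / 20) / peak   # one constant gain for every sample
    return [s * gain for s in samples]

# Bring a quiet take up so its loudest sample is 1 dB below full scale;
# the relative balance between samples is untouched.
quiet = [0.05, -0.12, 0.08, 0.02]
loud = normalize_peak(quiet)
```

Because the gain is constant, normalization changes loudness but never the dynamics; that is what distinguishes it from compression.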




Emphasizes harmonic frequency content at specified frequencies.




The basis of distorting a sound is to increase the harmonic content of the incoming signal. This essentially creates more sound and thus increases the volume. It is normally used on bass, yet can be creatively useful on many elements in music.



:Bit Crusher:

Bit crushers lower the bit depth (and often the sample rate) of an incoming signal, achieving a more lo-fi, "gritty" and in some cases "harsh" texture.
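Reducing bit depth is just requantizing each sample onto a coarser grid. A per-sample sketch (function name and the 4-bit default are illustrative):

```python
def bitcrush(sample, bits=4):
    """Requantize a sample in [-1, 1] onto 2**bits levels, adding lo-fi grit."""
    steps = 2 ** (bits - 1)
    return round(sample * steps) / steps

# At 4 bits there are only 16 levels, so nearby values collapse together.
crushed = [bitcrush(s) for s in (0.337, -0.501, 0.1)]
```

Lowering `bits` further makes the staircase coarser and the texture harsher, which is exactly the "gritty" quality described above.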




:Transient Shaper:

A transient shaper lets you change the volume envelope of each hit: it detects the transient of each drum in your track, boosts or reduces the volume over the length of the hit, then resets as soon as the next hit comes through.





:Tremolo:

There are two types of tremolo.
a) Rapid reiteration of a single note, particularly used on bowed string instruments and plucked strings such as harp, where it is called bisbigliando or "whispering"; or rapid alternation between two notes or chords, an imitation of the preceding that is more common on keyboard instruments. Mallet instruments such as the marimba are capable of either method, as is a roll on any percussion instrument, whether tuned or untuned.
b) Variation in amplitude, using electronic effects and effects pedals which rapidly turn the volume of a signal up and down, creating a "shuddering" effect, or an imitation of the same by strings in which pulsations are taken in the same bow direction.




:Vibrato:

Vibrato is a musical effect consisting of a regular, pulsating change of pitch. It is used to add expression to vocal and instrumental music. Vibrato is typically characterised in terms of two factors: the amount of pitch variation and the speed with which the pitch is varied.





:Saturation:

This effect is in essence the same as distortion, yet saturation works much more gently, allowing you to dial in subtle degrees of distortion without introducing harshness. This also lets you drive the sound harder, as you don't have to worry about the unwanted artefacts that distortion can sometimes introduce.




:Exciter:

An exciter can add life and high-end harmonic content to an otherwise dull signal, bringing out the harmonics and brightness needed, all without filling a sound with too much treble.






:Limiter:

A limiter acts in a similar way to a compressor, except that nothing can exceed its threshold. Limiters are sometimes referred to as a "brick wall" because nothing gets past them. They are normally used when mastering a final track to bring up its overall volume, but can be useful and creative in the mixing stage as well.




:Granular Effect:

A granular effect splits your audio signal into small pieces of around 1 to 50 ms, called grains. Multiple grains may be layered on top of each other and may play at different speeds, phases, volumes and frequencies, among other parameters.




:Vocoder:

A vocoder is an audio processor that captures the characteristic elements of an audio signal and then uses this characteristic signal to affect other audio signals. The technology behind the vocoder effect was initially used in attempts to synthesize speech. The effect called vocoding can be recognized on records as a talking synthesizer.





:Auditory Range:

The range of human hearing is from around 20Hz (20 cycles per second) to 20,000Hz (ie 20kHz), although with age one tends to lose acuity in the higher frequencies, so for most adults the upper limit is around 10kHz.
The lowest frequency that has a pitch-like quality is about 20Hz.
A typical value for the extent to which an individual can distinguish pitch differences is 0.5-1% for frequencies between 500 and 5000Hz (differentiation is more difficult at low frequencies). Thus at 500Hz most individuals will be unable to tell if a note is sharp or flat by 2.5-5Hz (ie an 'allowable' pitch range for that note might be from 495Hz to 505Hz at maximum).




Analog Recording

Analog recording is a technique used for recording analog signals: methods that store signals as a continuous wave in or on the media. The wave might be stored as a physical texture on a phonograph record, or as a fluctuation in the field strength of a magnetic recording.



Digital Audio

Digital audio refers to technology that records, stores, and reproduces sound by encoding an audio signal in digital form instead of analog form. Sound is passed through an analog-to-digital converter (ADC), and pulse-code modulation is typically used to encode it as a digital signal. A digital-to-analog converter performs the reverse process.


Audio Bit Depth

In digital audio using pulse code modulation (PCM), bit depth describes the number of bits of information recorded in each individual sample. Bit depth directly corresponds to the resolution of each sample in a set of digital audio data. Examples of bit depth include CD quality audio, which is recorded at 16 bits, and DVD-Audio and Blu-ray Disc which can support up to 24 bits.

A set of digital audio samples contains data that provides the necessary information to reconstruct the original signal. The audio bit depth limits the signal-to-noise ratio (SNR) of the reconstructed signal to a maximum level determined by quantization error. The bit depth has no impact on the frequency response, which is constrained by the sample rate.

Quantization noise is a model of quantization error introduced by quantization in the analog-to-digital conversion (ADC) in telecommunication systems and signal processing. It is a rounding error between the analog input voltage to the ADC and the output digitized value. The noise is non-linear and signal-dependent.

In an ideal ADC, where the quantization error is uniformly distributed between −1/2 and +1/2 least-significant bit, and where the signal has a uniform distribution covering all quantization levels, the Signal-to-quantization-noise ratio (SQNR) can be calculated from

SQNR = 20 log10(2^Q) ≈ 6.02 · Q dB

where Q is the number of quantization bits. 24-bit digital audio has a theoretical maximum SNR of 144 dB, compared to 96 dB for 16-bit; however, as of 2007 digital audio converter technology was limited to an SNR of about 124 dB (21-bit) because of real-world limitations in integrated circuit design. Still, this approximately matches the performance of the human auditory system.
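The "about 6 dB per bit" rule above falls straight out of the formula. A quick check (function name is illustrative):

```python
import math

def sqnr_db(bits):
    """Theoretical SQNR of an ideal ADC: 20*log10(2**bits), roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

# 16-bit audio tops out near 96 dB, 24-bit near 144 dB, matching the table below.
cd, studio = sqnr_db(16), sqnr_db(24)
```

Each extra bit doubles the number of quantization levels, adding a constant 20·log10(2) ≈ 6.02 dB to the ratio.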

It is important to note that bit depth is only meaningful when applied to PCM. Non-PCM formats, such as lossy compression formats like MP3, AAC and Ogg Vorbis, do not have associated bit depths. For example, in MP3, quantization is performed on PCM samples that have been transformed into the frequency domain.

Using higher bit depths during studio recording enables greater headroom to be left on the recording. This reduces the risk of clipping without encountering quantization errors at low volumes.

Bit depths by signal-to-noise ratio

Bits - SNR - Possible integer values
4 - 24.08 dB - 16
8 - 48.16 dB - 256
16 - 96.33 dB - 65,536
24 - 144.49 dB - 16,777,216
32 - 192.66 dB - 4,294,967,296
48 - 288.99 dB - 281,474,976,710,656
64 - 385.32 dB - 18,446,744,073,709,551,616

Floating point. Many audio file formats and digital audio workstations (DAWs) now support PCM formats with samples represented by floating point numbers. Both the Microsoft WAV file format and the Apple AIFF file format support floating point PCM, and major DAWs support varied floating point processing capabilities.

Unlike integers, whose bit pattern is a single series of bits, a floating point number is composed of several smaller bit patterns whose mathematical relation forms a number. This method of representation is similar to scientific notation and expands a binary system to more closely approximate real numbers. Floating point numbers still have fixed upper and lower bounds, but the method of representation allows increasingly smaller integer values to include an increasingly larger fractional part.

The most common standard is IEEE floating point, which is composed of three bit patterns: a sign bit which represents whether the number is positive or negative, an exponent, and a mantissa which is raised by the exponent. The mantissa is expressed as a binary fraction in IEEE base-two floating point formats. The examples below use the IEEE single-precision (32-bit) format: 1 sign bit, 8 exponent bits (bias 127) and 23 mantissa bits.


For example, the 32-bit floating point bit pattern 1 01111101 01011000000000000000000 is interpreted as follows:

(-1)^1 × (1 + 0.34375) × 2^(125 - 127) = -1.34375 × 2^-2 = -0.3359375

As a different example, the bit pattern 0 10010010 00000001010000000001000 is a larger number, and shows how the fractional part of the represented value shrinks as the exponent grows:

(-1)^0 × (1 + 0.004883766174316406) × 2^(146 - 127) = 1.004883766174316406 × 2^19 = 526,848.5
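The interpretation above can be checked by packing the three fields (sign, biased exponent, fraction) into a 32-bit word with the standard-library `struct` module and letting the machine's own IEEE 754 hardware read it back. The helper name is illustrative:

```python
import struct

def make_float32(sign, biased_exp, fraction):
    """Assemble an IEEE 754 single: 1 sign bit, 8 exponent bits, 23 fraction bits."""
    bits = (sign << 31) | (biased_exp << 23) | int(fraction * 2 ** 23)
    return struct.unpack('>f', struct.pack('>I', bits))[0]

# Sign 1, exponent 125, fraction 0.34375 -> (-1) * 1.34375 * 2**(125-127)
small = make_float32(1, 125, 0.34375)
# Sign 0, exponent 146, fraction 0.004883766174316406 -> 1.00488... * 2**19
large = make_float32(0, 146, 0.004883766174316406)
```

Both example values are exactly representable in single precision, so the round trip through the bit pattern reproduces them without rounding error.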

Audio processing. Sometimes a small amount of random noise, called dither, is deliberately added to the signal before quantizing. Dithering eliminates the granularity of quantization error, giving very low distortion, but at the expense of a slightly raised noise floor. Measured using ITU-R 468 noise weighting, this is about 66 dB below alignment level, or 84 dB below digital full scale, which is somewhat lower than the microphone noise level on most recordings and hence of no consequence. 24-bit audio is sometimes used undithered, because for most audio equipment and situations the noise level of the digital converter can be louder than the required level of any dither that might be applied.

For most situations the advantage of a resolution higher than 16-bit lies mainly in the ease of setting recording levels. With 16-bit audio, poorly set recording levels can result in noisy recordings. With 24-bit audio, up to 10-20 dB of extra range can be available, providing additional margin for error. Although 24-bit audio provides additional dynamic range, this is generally insufficient for all but trivial processing steps. Furthermore, the use of integer precision introduces potentially hard-to-anticipate overflow and underflow errors. Consequently, most audio processing is performed after first converting to 32-bit or higher floating point precision. Following processing, samples are often reduced to 16- or 24-bit precision for distribution, or encoded to non-PCM formats such as MP3 that do not have a finite bit depth.


Sample Rate

The sample rate or sampling frequency defines the number of samples per unit of time taken from a continuous signal to make a discrete signal. For time-domain signals, the unit for sampling rate is hertz (inverse seconds, 1/s, s−1), sometimes noted as Sa/s or S/s (samples per second). The reciprocal of the sampling frequency is the sampling period or sampling interval, which is the time between samples.

Oversampling: In some cases it is desirable to have a sampling frequency more than twice the desired system bandwidth, so that a steep digital filter combined with a gentler analog anti-aliasing filter can be used in place of a single steep analog anti-aliasing filter. The advantage is that the digital filter is not subject to component variations, so it always gives exactly the filter response (filtering function) that the designer has chosen. This process is known as oversampling.

Undersampling : Conversely, one may sample below the Nyquist rate. For a baseband signal (one that has components from 0 to the band limit), this introduces aliasing, but for a passband signal (one that does not have low frequency components), there are no low frequency signals for the aliases of high frequency signals to collide with, and thus one can sample a high frequency (but narrow bandwidth) signal at a much lower sample rate than the Nyquist rate.

In digital audio the most common sampling rates are 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz and 192 kHz. Lower sampling rates have the benefit of smaller data size and easier storage and transport. Because of the Nyquist-Shannon theorem, sampling rates higher than about 50 kHz to 60 kHz cannot supply more usable information for human listeners. Early professional audio equipment manufacturers chose sampling rates in the region of 50 kHz for this reason. 88.2 kHz and 96 kHz are often used in modern professional audio equipment, along with 44.1 kHz and 48 kHz. Higher rates such as 192 kHz are prone to ultrasonic artifacts causing audible intermodulation distortion, and to inaccurate sampling caused by the very high conversion speed. The Audio Engineering Society recommends a 48 kHz sample rate for most applications but gives recognition to 44.1 kHz for Compact Disc and other consumer uses, 32 kHz for transmission-related applications and 96 kHz for higher bandwidth or relaxed anti-aliasing filtering.


8,000 Hz : Telephone, walkie-talkie, wireless.

11,025 Hz : One quarter the sampling rate of audio CDs; used for lower-quality PCM and MPEG audio and for audio analysis of subwoofer bandpasses.

16,000 Hz : Wideband frequency extension over standard telephone narrowband 8,000 Hz. Used in most modern VoIP and VVoIP communication products.

22,050 Hz : One half the sampling rate of audio CDs; used for lower-quality PCM and MPEG audio and for audio analysis of low frequency energy. Suitable for digitizing early 20th century audio formats such as 78s.

32,000 Hz : MiniDV and DVCAM with 4 channels of audio, DAT (LP mode), NICAM, high-quality digital wireless microphones.

44,056 Hz : Used by digital audio locked to NTSC color video signals (245 lines by 3 samples by 59.94 fields per second = 29.97 frames per second).

44,100 Hz : Audio CD, most commonly used with MPEG-1 audio (VCD, MP3, among others). Much pro audio gear uses (or is able to select) 44.1 kHz sampling, including mixers, EQs, compressors, reverbs, crossovers, recording devices and CD-quality encrypted wireless microphones.

47,250 Hz : World's first commercial PCM sound recorder, by Nippon Columbia.

48,000 Hz : The standard audio sampling rate used by professional digital video equipment; used for sound with consumer video formats like DV, digital TV, DVD, and films. Much professional audio gear uses (or is able to select) 48 kHz sampling, including mixers, EQs, compressors, reverbs, crossovers and recording devices such as DAT.

50,000 Hz : First commercial digital audio recorders of the late 1970s, from 3M and Soundstream.

50,400 Hz : Sampling rate used by the Mitsubishi X-80 digital audio recorder.

88,200 Hz : Sampling rate used by some professional recording equipment when the destination is CD (a multiple of 44,100 Hz).

96,000 Hz : DVD-Audio, some LPCM DVD tracks, BD-ROM (Blu-ray Disc) audio tracks, HD DVD (High-Definition DVD) audio tracks.

176,400 Hz : Sampling rate used by HDCD recorders and other professional applications for CD production.

192,000 Hz : BD-ROM (Blu-ray Disc) audio tracks and HD DVD (High-Definition DVD) audio tracks; four times 48 kHz.

352,800 Hz : Digital eXtreme Definition, used for recording and editing Super Audio CDs, as 1-bit DSD is not suited for editing. Eight times the frequency of 44.1 kHz.

2,822,400 Hz : SACD, a 1-bit delta-sigma modulation process known as Direct Stream Digital, co-developed by Sony and Philips.

5,644,800 Hz : Double-Rate DSD, 1-bit Direct Stream Digital at twice the rate of SACD. Used in some professional DSD recorders.





In digital music processing technology, quantization is the process of transforming performed musical notes, which may have some imprecision due to expressive performance, to an underlying musical representation that eliminates this imprecision. The process results in notes being set on beats and on exact fractions of beats. 
Normal quantization: the 1/1-, 1/2-, 1/4-, 1/8-, 1/16-, 1/32-, 1/64- and 1/128-note settings quantize the MIDI or audio region to the equivalent note value.
Triplet quantization: the 1/3-, 1/6-, 1/12-, 1/24-, 1/48- and 1/96-note settings quantize the MIDI region to triplet note values. A 1/6 note is equivalent to a quarter-note triplet, a 1/12 note to an eighth-note triplet, a 1/24 note to a sixteenth-note triplet, and a 1/48 note to a thirty-second-note triplet.
Quantization off: the off (3840) setting plays notes at the finest possible timing resolution: 1/3840 note, which is unquantized playback in practical terms.
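The settings above all amount to snapping note times to the nearest point on a grid. A minimal sketch in beats, assuming a quarter-note beat (the function name and grid convention are illustrative, not any sequencer's API):

```python
def quantize(beats, division=16):
    """Snap a time in beats to the nearest 1/division note (4 quarter notes per bar)."""
    grid = 4.0 / division          # 1/16 note = 0.25 beat; 1/12 note = eighth triplet
    return round(beats / grid) * grid

# A slightly late hit at beat 1.13 snaps onto the 1/16-note grid at 1.25,
straight = quantize(1.13, 16)
# while division=12 snaps to the nearest eighth-note triplet (1/3 of a beat).
triplet = quantize(0.3, 12)
```

Using `division=3840` makes the grid so fine that notes effectively keep their performed timing, which is the "quantization off" case described above.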





The quietest sounds that can be heard have an intensity of about 10^-12 W/m², whilst the loudest that can be withstood have an intensity of about 1 W/m². The range is therefore on the order of 10^12, or one million million times. Each ten-decibel step is a leap by a factor of 10 in power, so that 0 dB is the quietest audible sound and 120 dB is the loudest bearable one.
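The decibel scale above is just a base-10 logarithm of the intensity ratio. A quick check (function name is illustrative):

```python
import math

def intensity_to_db(intensity_w_m2, reference=1e-12):
    """Sound intensity level in dB relative to the 10^-12 W/m^2 hearing threshold."""
    return 10 * math.log10(intensity_w_m2 / reference)

# The full range of hearing spans 120 dB: 10^12 in power compresses to a 0-120 scale.
threshold = intensity_to_db(1e-12)   # quietest audible sound
pain = intensity_to_db(1.0)          # loudest bearable sound
```

The factor 10 (rather than 20) applies because intensity is a power-like quantity; amplitude ratios use 20·log10 instead.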






Beat frequencies are produced when two sounds very close to each other in frequency are played together. In such a situation the crests and troughs of each wave are generally slightly out of phase, but because the two notes have differing frequencies, after a certain repeating interval of time the crests of one wave will align with the crests of the other, and a pulse or beat appears to the listener.
Research has found that any two notes of different frequencies tend to sound good together (ie consonant) if no beat frequencies between 8 and 50 Hz are produced. Beat frequencies of 2-8Hz have been found to be pleasing, while beat frequencies above that level are generally thought to be unpleasant.
The beat frequency produced by any two notes is found by subtracting the lower frequency from the higher, ie Fbeat = Fhigher - Flower. Thus beat frequencies are a subset of difference tones, the 'beat' sensation occurring when the beat frequency value is low, from say 0.5Hz (1 beat every two seconds) to say 20Hz (the lowest frequency that has a pitch-like quality). J. Askill, in 'The Physics of Musical Sounds', says 'in general beat frequencies of 2-8Hz are considered pleasing, whereas if the beat frequency is above 15-20Hz, an unpleasant or dissonant effect is produced'. Personally I am curious about the range in the middle, say 8-12Hz, which is also the frequency of 'alpha' brain-waves.
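The Fbeat = Fhigher - Flower rule and the 2-8 Hz "pleasing" range quoted above can be sketched directly (function names and the rough classification bands are illustrative):

```python
def beat_frequency(f_a, f_b):
    """Fbeat = Fhigher - Flower: the pulse heard when two close notes sound together."""
    return abs(f_a - f_b)

def character(fbeat):
    """Rough classification following the ranges quoted above."""
    if 2 <= fbeat <= 8:
        return 'pleasing'
    if fbeat > 15:
        return 'dissonant'
    return 'indeterminate'

# Two strings at 440 Hz and 443 Hz beat 3 times per second.
fbeat = beat_frequency(443, 440)
```

This is, in fact, how instruments are tuned by ear: adjust one string until the beating against the reference slows to a stop.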




The scale used extensively in the West has 13 notes from octave to octave (counting both octave notes) and 12 intervals. In order for a scale to 'work' there should be:
* a minimum of dissonance when different notes across the whole range of pitches are sounded together
* an effective mapping of the harmonics of low notes onto higher notes, and an effective mapping onto the harmonics of higher notes
* the possibility of key modulation which does not result in further frequency mismatches.
The scale which has been widely adopted to fulfil these criteria is based on mathematics, such that the ratio of the frequency of any note to the frequency of the note a semitone above is constant. This is particularly useful in the extent to which it allows key modulation. However, this 'equitempered' scale is a compromise solution, because the frequency ratios of all intervals except the octave differ slightly from the 'perfect' intervals that the human ear really expects to hear.
The 'exact' interval of a fifth, for instance, is found by multiplying the frequency of the fundamental by 3/2, the fourth is found by multiplying the fundamental frequency by 4/3, and the major third interval is found by multiplying the fundamental frequency by 5/4. (Other intervals involve slightly less obvious fractions).
The problem with a scale built on fractional values like this, however, is that the increments from note to note are not constant (eg 5/4 - 4/3 does not equal 4/3 - 3/2) which creates difficulties when the required key for a piece is different to that of the fundamental from which the scale is constructed. For example, if we move up an octave from C by adding a fifth, and then adding a fourth, then the resulting high C will have a different frequency to that arrived at if our key is F, and we try to arrive at the same high C by adding a major third and then a minor third to that fundamental F. So in the equitempered scale all semitone increments have been 'tempered' such that they are always a little flat, or a little sharp.
The constant value on which this scale is based is 1.0594630944, such that if we call this value S, then the semitone above a fundamental note is found by multiplying the frequency of the fundamental by S to the power of 1.
The second above the fundamental is found by multiplying its frequency by S to the power of 2, and so on, until the octave above the fundamental is found by multiplying its frequency by S to the power of 12. The value of the constant S is the 12th root of 2, since in order to find the twelve equal divisions between two notes an octave apart, where the frequency of the octave is twice that of the fundamental, the 12th root of 2 is the value we are looking for.
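The S-to-the-power-of-n construction above can be checked numerically (function name is illustrative):

```python
S = 2 ** (1 / 12)                      # the semitone constant, ~1.0594630944

def note_freq(fundamental_hz, semitones):
    """Frequency `semitones` above (or below, if negative) the fundamental."""
    return fundamental_hz * S ** semitones

# Twelve semitones above A4 (440 Hz) lands exactly on the octave, A5 = 880 Hz.
a5 = note_freq(440.0, 12)
# The equitempered fifth (7 semitones) falls slightly flat of the 'exact' 3/2 fifth (660 Hz).
fifth = note_freq(440.0, 7)
```

The small gap between the computed fifth and 660 Hz is exactly the compromise the text describes: every equitempered interval except the octave is a little off its 'perfect' ratio.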
Other divisions of the octave have been proposed, such as a 19-step octave and a 53-step octave. The maths for these 'works', although these 'scales' may be harder to use effectively. The maths for the 53-division scale is particularly elegant in fact, and closer to a 'perfect' musical scale than the 13-note scale which we currently use. (In that case the constant value for each successive interval is found by using the 53rd root of 2, ie 1.013164143.)
The 'exact' scale, built on the 'perfect' intervals that the ear expects to hear, has much to recommend it if one key is kept to. This scale, however, has fifteen intervals and fourteen notes, since in the first octave there are all the notes of the equitempered scale (at slightly different frequencies) but there is also both a 'major whole tone' and a 'minor whole tone', and both an 'augmented fourth' and a 'diminished fifth'. (In the second octave there is both an 'augmented octave' and a 'diminished ninth', and also both an 'augmented eleventh' and a 'diminished twelfth'.) Thus successive octaves above the fundamental differ from each other in the way that they are put together. Furthermore, when we look at the extent to which the frequencies of harmonics of exact-scale notes match notes higher up in the exact scale, we see that we can list the intervals octave, fifth, fourth, major third, major sixth, minor third, minor sixth in terms of increasing dissonance, so that in the case of the minor sixth, if we look at all harmonics up to the twelfth, only one 'matches'.
The mathematical elegance of the 53-division scale should make it better suited both to key modulation and to work that is preoccupied with harmonics.
The 53-interval scale uses uneven 'chunks' of these 53rd-of-an-octave divisions to create the notes of the diatonic scale. The size of each chunk, in steps, is as follows:
C->D: 9
D->E: 8
E->F: 5
F->G: 9
G->A: 8
A->B: 9
B->C: 5
and with the semitones:
C->D: 4+5=9
D->E: 4+4=8
E->F: 5
F->G: 4+5=9
G->A: 4+4=8
A->B: 4+5=9
B->C: 5
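The chunk sizes above can be checked in a couple of lines: they sum to exactly 53 steps, i.e. one full octave. A minimal sketch:

```python
# The 53-division scale: each step multiplies frequency by the 53rd root of 2.
step_ratio = 2 ** (1 / 53)         # ~1.013164 per step

# Whole-tone/semitone chunk sizes from the table above, in 53rd-octave steps.
chunks = {'C-D': 9, 'D-E': 8, 'E-F': 5, 'F-G': 9,
          'G-A': 8, 'A-B': 9, 'B-C': 5}

assert sum(chunks.values()) == 53  # the chunks cover the whole octave exactly

step_cents = 1200 / 53             # each step is ~22.64 cents
print(round(step_ratio, 6), round(step_cents, 2))
```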

The Physiological Effects Of Sound

The accepted view of researchers into the physiological effects of sound is that 'no non-auditory [i.e. physiological] effects are noted until the loudness exceeds approximately 120 dB'. 120 dB is VERY loud; in fact it is at the limit of what can be heard before physical damage is caused to the ear. However, research has been carried out into the extent to which vibration at frequencies within the audible range can be transmitted through the body. Different parts of the body have different optimum resonance frequencies. Some of these are listed below (the first is conjecture; the rest are documented):
The eyeball: 5Hz. (Low frequencies such as this are known as 'infrasound').
The jaw: 6-8Hz.
The chest, nose and throat cavities: somewhere in the region of 10-75Hz.
The whole skull: 200Hz.
The front and the back of the skull: 800Hz, where the front and back parts vibrate in opposite phase.
The front, left, back and right sides of the skull: 1600Hz. At this frequency each of the four sides vibrates independently of the others. (The exact frequencies for skull resonance vary from individual to individual because of variations in skull size; all values given here are approximate averages.)
The bones of the middle ear (the 'ossicular' system) resonate at 2000Hz.
The air within the middle ear resonates at 2500Hz.
The resonant frequency of the outer ear is 3150Hz. (Sounds in the surrounding frequency range, from about 3000Hz to 3500Hz are amplified several times by this effect).
There is some evidence to suggest that the middle ear, when exposed to ultrasound (i.e. sound above the upper limit of audible frequencies), creates subharmonics within the audible range.




:Cubase Tips:

Starting off with a new piece of software can often seem a daunting process, so let's take a look through some basic processes in Cubase and kick-start your knowledge database...
To open a project:
1. File - New Project - Empty - create a folder and name it "track 1", for example
2. Save with Ctrl+S, then name the project within the folder you have created
To import audio samples:
1. File - import - audio file
To set up virtual instrument:
1. In the blue column to the left, right-click and select Add MIDI Track
2. Choose an instrument: press F11 and pick a synth of your choice
3. Go back to your MIDI track and make sure the "show inspector" tab is highlighted blue (top left)
4. Go to "out" and select the synth you have added
Arranging content:
1. Choose your sample/MIDI block, e.g. a kick drum
2. Click, hold Alt, then drag and drop
3. Change the quantize to suit the groove (top right); >I< = snap to grid
4. With snap to grid on, it will follow your quantize options to the right, e.g. bar, beat, use quantize - alter to 1/16ths etc.
5. I would recommend snap to grid on, with use quantize set to 1/16
Navigation bar:
1. Press F2 to show or hide it
2. Contains useful features such as Auto Q (quantizes incoming MIDI, e.g. from a keyboard)
3. Stop, play, skip etc. - just like a discman!
4. Click = click track/ metronome
5. Tempo - change to fixed
6. Change BPM to what you want
Mixer:
1. Press F3 to open & close the mixer
2. Press the small "e" icon to open up effects
Effects window (small e button):
1. Left to right: inserts -> eq -> sends
2. Inserts... think of them as guitar FX pedals, such as distortion
3. EQ is for shaping the sound... try the presets (top black toolbar)
4. Sends are for ambient effects such as delay or reverb, or for buses - for example, if you wanted to send percussion to a group drum channel
Setting up ambient sends:
1. Right-click in the blue column and select "Add FX Channel Track"
2. Select stereo and the effect you want, such as reverb
3. In the effect box (reverb, for example), choose a preset such as "hall"
4. Turn the mix up to 100%
5. Go to the mixer (F3), find the sound/instrument you want reverb on, and press the small "e"
6. On the right-hand side you have your sends... select your new effect (reverb, for example)
7. Turn it on and send the signal
Automation:
1. For audio, add an insert/EQ/send that you want to alter, for example a filter
2. Click the small + at the bottom left of the audio cell
3. Select the parameter that you want to automate, then press the R button; sometimes you may need to press "more" to find the one you want
4. The line is now activated; draw in your changes
5. For synths the same applies, except that instead of pressing the + on the MIDI track you have to find the actual synth
6. This can be found in the "VST Instruments" tab
To export a song:
1. Press Ctrl+A to highlight the whole tune
2. Press P to set the loop over the whole track
3. File - Export - Audio Mixdown
4. Choose the file type & make sure it's stereo interleaved (channel tab)
EQ & Level tips:
1. To make sure you don't distort: when you start a project, set your loudest signal half way up on the mixer
2. For EQ, use common sense: synths sit in the mid-range, cymbals are high, bass is low... trim or enhance in those areas
3. A main rule for EQ is to cut bass where it isn't needed - this helps keep your mix clear.
Keyboard shortcuts (speed up your work-flow!):



::How to make a kick drum::

:With Sound Forge: 

Make a new file
Make a sine wave: select Tools/Synthesis/Simple
Bend the pitch over time: select Effects/Pitch/Bend
Repeat as often as you like; for psytrance, the sharp sound comes from three pitch bends of 24 semitones each, for a total pitch sweep range of 6 octaves (12 semitones per octave)
Save your kick
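The Sound Forge recipe above (sine wave plus downward pitch bend) can also be sketched in code. This is an illustrative sketch using only Python's standard library; the start/end frequencies, length and decay rate are example values of my own, not anything prescribed by the recipe:

```python
# A kick drum as a sine wave whose pitch sweeps down rapidly,
# written out as a 16-bit mono WAV file.
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION = 0.4                   # seconds (assumed example length)
F_START, F_END = 150.0, 40.0     # pitch sweeps from 150 Hz down to 40 Hz

samples = []
phase = 0.0
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE
    # Exponential pitch sweep, like the repeated pitch bends in the text.
    freq = F_START * (F_END / F_START) ** (t / DURATION)
    phase += 2 * math.pi * freq / SAMPLE_RATE
    # Exponential amplitude decay gives the percussive envelope.
    amp = math.exp(-6 * t / DURATION)
    samples.append(int(32767 * amp * math.sin(phase)))

with wave.open("kick.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)            # 16-bit
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Raising F_START or shortening DURATION gives a clickier, harder kick; repeating with a steeper sweep approximates stacking several pitch bends as described above.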

Another way to make a kick drum is with a dedicated plugin such as Bazzism.


:Basslines VST Suggestion:


Alien 303 - Linplug Cronox 2 & 3 - Vanguard
Steinberg VB-1 - Rob Papen Predator - Trillian
Sylenth 1 - Karma fx - Muon Tau Pro



*Most hardware synths can also create basslines*

:Equalizer VST Suggestion:
Waves SSL - PSP sQuad - SPL eq Rangers
URS N4 - Fabfilter Pro-Q - DMG Audio EQuality
Waves PuigTec EQP-1A - Sonnox Oxford EQ - Voxengo GlissEQ




:Compressor VST Suggestion:

PSP Vintage Warmer 2 - Pure Compressor II - C1 Waves
Classic Compressors Waves - SSi Pro Compressor - Tube-Tech CL 1B 
IK Multimedia Model 670 - Punch Evolved - Alchemist



Have Fun

You first started writing music because you love music. If you're not enjoying composing music, then do something else for a while; people do their best work at what they love.