Categories
Asia Noise News Environment Home Industrial

Impact of Soundscape in Perception

Previously, we discussed how the human auditory system works and recognizes the direction of a sound. Now we will discuss how sound is perceived in the mind. In acoustics, the processing of sound by the human auditory system is divided into two different mechanisms: hearing and listening. Hearing is the mechanical process by which sound waves propagate into the human auditory system, owing to its sensitivity to sound-wave vibrations of a certain frequency and intensity. Listening is hearing combined with the interpretation of information about the surrounding environment, based on the details contained in the sound waves that are heard.

The sound waves heard during listening carry information not only about the source itself but also about the environment in which the sound is heard, because of the physical mechanisms that act on the wave as it propagates. Listening is considered a complex mechanism because it involves multiple levels of attention and higher cognitive functions. Three levels are used to describe this complexity: listening-in-search, listening-in-readiness, and background listening.

Listening then shapes our interpretation and perception of an environment based on its acoustic conditions. For example, if we close our eyes and are given stimuli such as the sound of water, squeaking, and wind at a certain sound pressure level (SPL), we may interpret this as the feeling of being in a park. If the sound of vehicles is then added at a clearly audible level, it may disturb the atmosphere of the park and make us feel uncomfortable. The acoustic action and interaction of natural and/or human factors in a place is called the soundscape. The term emphasizes that environmental sound does not simply reach a person; what matters is also how that person interacts with the sound and directs attention to the sounds that arise.

A simple soundscape involves the types of sound sources, the location and the activities that occur in the related environment, the environmental conditions, and the various subjective factors that shape a person's perception and interpretation. This relates to the definition of soundscape as something that builds a person's perception, influenced by their socio-cultural background, and approached from a variety of disciplines. The soundscape process can be seen in the diagram in Figure 1.

Soundscape analysis can produce information that serves as the basis for sound management: deciding which sounds should be heard and which should be covered by other sounds (masking), and directing visitors' attention to sounds that match their expectations of the function of the place.

Written by:

Adetia Alfadenata

Acoustic Engineer

Geonoise Indonesia

support.id@geonoise.asia

References:

1. B. Truax, Acoustic Communication. Ablex Publishing, 1984.

2. A. Ozcevik and Z. Y. Can, “A Field Study on The Subjective Evaluation of Soundscape,” in Acoustics 2012, Apr. 2012, pp. 2121–2126.

3. F. Aletta and J. Kang, “Soundscape descriptors and a conceptual framework for developing predictive soundscape models,” 2016.

4. The British Standards Institution, “BS ISO 12913-1:2014 – Acoustics — Soundscape Part 1: Definition and conceptual framework,” ISO, 2014.

5. D. Botteldooren, C. Lavandier, and A. Preis, “Understanding urban and natural soundscapes,” in Forum Acusticum 2011, 2011, vol. 1, pp. 2047–2052.

Categories
Asia Noise News Building Acoustics Environment Home Industrial

Soundscape Under Covid-19

Many people around the world are experiencing life with very low noise levels as we are confined to our homes and industrial, transportation and leisure activity declines. This provides a wonderful opportunity to quantify, and record for the future, the lower noise levels of our soundscapes. With the reduction in shipping there is also a change in underwater soundscapes.

There are now a large number of noise monitoring systems (noise monitoring terminals, city-wide systems, underwater systems, etc.) installed all over the world that will capture this information for the future. There are also many acousticians working from home with access to a sound level meter, which can be used to capture the soundscape from a balcony or garden and compare conditions before and after the restrictions.

The IYS 2020 committee has provided a central point of contact between a number of people around the world who were thinking along similar lines: that there would be some benefit in coordinating, and lightly standardizing, the capture of the data. Marçal Serra from CESVA has taken the lead in setting up the LinkedIn group COVID-19 Noise Reduction (at www.linkedin.com/groups/13844820/), with the hashtag #COVID19NoiseReduction for any posts.

The following is a general structure for those who wish to participate and share their data in the future. But do not break your confinement to report this data!

  • Place: Country and city (e.g., Spain, village near Barcelona)
  • Primary noise source: (e.g., Traffic noise: note the number of lanes per direction; or Social noise: note if café/bar/restaurant/sporting)
  • Noise measuring system: The noise measuring system used to measure Lduring, Lbefore, and Lafter
  • Noise level during COVID-19 confinement: Lduring, expressed as a weighted overall level (preferably LAeq,1 hour; see the averaging sketch after this list), a spectrum, or psychoacoustic metrics such as loudness. It could also be reported as an image of the noise time history or a weekly color map, and/or compiled into a report/article/conference paper with the measurement details and the comparison data
  • Noise level before & after COVID-19 confinement: Lbefore & Lafter, expressed in the same way as Lduring and over the same time period.
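
For anyone compiling these comparisons, note that LAeq values combine as energy averages, not arithmetic means. Below is a minimal Python sketch of that averaging; the readings in it are hypothetical placeholders, not measured data.

    import numpy as np

    def laeq(levels_db):
        """Energy-average a series of short Leq readings (dB) into one
        LAeq over the whole period: 10*log10(mean(10^(L/10)))."""
        levels_db = np.asarray(levels_db, dtype=float)
        return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

    # Hypothetical 1-minute LAeq readings over one hour (not measured data)
    rng = np.random.default_rng(0)
    l_before = laeq(62.0 + 3.0 * rng.standard_normal(60))
    l_during = laeq(50.0 + 3.0 * rng.standard_normal(60))
    print(f"LAeq,1h before: {l_before:.1f} dB(A), during: {l_during:.1f} dB(A)")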
Categories
Asia Noise News

Human Hearing

Binaural hearing allows us to localize the source of a sound and to suppress noise, for example to better understand speech. An important aspect of auditory perception that lets us localize sound and adjust to a room is spatial hearing. There are two kinds of cues used in localizing sounds in humans: monaural cues and difference cues.

  • Monaural Cues

Monaural cues describe how each individual ear translates the captured sound signal. They are the result of convolving the sound source with a head-related impulse response (HRIR), whose frequency-domain form is the head-related transfer function (HRTF). The HRTF describes how a sound from a certain direction is modified on its way to the ear; the transformation involves diffraction and reflection from the anatomy of the head and ear. Because the HRTF depends on the location of the sound source relative to the listener, it helps determine where the source is.
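
As an illustration of this convolution, here is a minimal Python sketch. The HRIR file names are hypothetical placeholders; real HRIRs would come from a measured database (e.g., a SOFA file).

    import numpy as np
    from scipy.signal import fftconvolve

    # Placeholder HRIR pair for one source direction; the file names
    # below are hypothetical, not from any specific database.
    hrir_left = np.load("hrir_left_az30.npy")
    hrir_right = np.load("hrir_right_az30.npy")

    mono = np.random.default_rng(0).standard_normal(48000)  # 1 s of test noise

    # Monaural cue: each ear's signal is the source signal convolved
    # with that ear's head-related impulse response.
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    binaural = np.stack([left, right], axis=1)  # stereo output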

  • Difference Cues

Difference cues describe how the differences between the two ears' signals are translated. These cues carry information on the Interaural Time Difference (ITD) and the Interaural Level Difference (ILD). ITD is the difference in the arrival time of a sound wave at the left and right ears, while ILD is the difference in pressure level between the left and right ears. Based on the Duplex Theory, ITD values are used to localize sounds at low frequencies, below about 1.5 kHz, while ILD is used to localize sounds at high frequencies, above about 1.5 kHz. Environmental sounds span both low and high frequencies, so the human auditory system uses both ITD and ILD.

The basic principles in ITD are illustrated in Figure 1

Figure 1. Interaural Time Difference (ITD) principle

When the sound source emits low-frequency sound waves, the waves reach both ears without a decrease in sound pressure level. This is because the wavelength is larger than the dimensions of the head, so the wave diffracts around it. However, there is still a difference in arrival time between the two ears. Therefore, localization of low-frequency sound waves relies on the ITD.
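
To get a feel for the magnitudes involved, a common approximation for ITD is Woodworth's spherical-head formula. A short sketch (the head radius is a typical assumed value, not from this article):

    import numpy as np

    def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
        """Approximate interaural time difference (seconds) for a
        spherical-head model (Woodworth): ITD = (a/c) * (theta + sin(theta))."""
        theta = np.radians(azimuth_deg)
        return head_radius_m / c * (theta + np.sin(theta))

    # A source 90 degrees to the side gives the maximum ITD, about 0.66 ms;
    # one period of a 1.5 kHz tone is 0.67 ms, consistent with ITD being
    # useful mainly below ~1.5 kHz, where phase is unambiguous.
    print(f"ITD at 90 degrees: {itd_woodworth(90.0) * 1e3:.2f} ms")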

The basic principles of ILD are illustrated in Figure 2. The ILD is influenced by the size of the head, and it is also significant for sources very close to the head. When the sound source is in the high-frequency range, where the wavelength is smaller than the dimensions of the head, the sound first reaches the ear nearer to the source. The head blocks part of the wave travelling to the far ear, a phenomenon called the acoustic shadow. The sound that finally reaches the far ear therefore arrives with a reduced sound pressure level.

Figure 2. Acoustic shadow phenomenon at high frequency

Written by:

Adetia Alfadenata

Acoustic Engineer

Geonoise Indonesia

support.id@geonoise.asia

Reference

  1. T. Potisk, “Head-Related Transfer Function,” 2015.
  2. X. Zhong and B. Xie, “Head-Related Transfer Functions and Virtual Auditory Display,” in Soundscape Semiotics: Localization and Categorization, 2014.
  3. W. György, “HRTFs in Human Localization: Measurement, Spectral Evaluation and Practical Use in Virtual Audio Environment,” 2002.
  4. K. Carlsson, “Objective Localisation Measures in Ambisonic Surround-sound,” 2004.
Categories
Asia Noise News

Ultrasound Selectively Damages Cancer Cells When Tuned to Correct Frequencies

Doctors have used focused ultrasound to destroy tumors in the body without invasive surgery for some time. However, the therapeutic ultrasound used in clinics today indiscriminately damages cancer and healthy cells alike.

Most forms of ultrasound-based therapies either use high-intensity beams to heat and destroy cells or special contrast agents that are injected prior to ultrasound, which can shatter nearby cells. Heat can harm healthy cells as well as cancer cells, and contrast agents only work for a minority of tumors.

Researchers at the California Institute of Technology and City of Hope Beckman Research Institute have developed a low-intensity ultrasound approach that exploits the unique physical and structural properties of tumor cells to target them and provide a more selective, safer option. By scaling down the intensity and carefully tuning the frequency to match the target cells, the group was able to break apart several types of cancer cells without harming healthy blood cells. Their findings, reported in Applied Physics Letters, from AIP Publishing, are a new step in the emerging field called oncotripsy, the singling out and killing of cancer cells based on their physical properties.

Targeted pulsed ultrasound takes advantage of the unique mechanical properties of cancer cells to destroy them while sparing healthy cells.

“This project shows that ultrasound can be used to target cancer cells based on their mechanical properties,” said David Mittelstein, lead author on the paper. “This is an exciting proof of concept for a new kind of cancer therapy that doesn’t require the cancer to have unique molecular markers or to be located separately from healthy cells to be targeted.”

A solid mechanics lab at Caltech first developed the theory of oncotripsy, based on the idea that cells are vulnerable to ultrasound at specific frequencies — like how a trained singer can shatter a wine glass by singing a specific note.

The Caltech team found at certain frequencies, low-intensity ultrasound caused the cellular skeleton of cancer cells to break down, while nearby healthy cells were unscathed.

“Just by tuning the frequency of stimulation, we saw a dramatic difference in how cancer and healthy cells responded,” Mittelstein said. “There are many questions left to investigate about the precise mechanism, but our findings are very encouraging.” The researchers hope their work will inspire others to explore oncotripsy as a treatment that could one day be used alongside chemotherapy, immunotherapy, radiation and surgery. They plan to gain a better understanding of what specifically occurs in a cell impacted by this form of ultrasound.

Written by:

Phawin Phanudom (Gun)
Acoustical Engineer

Geonoise (Thailand) Co., Ltd.
Tel: +6621214399
Mobile: +66891089797
Web: https://www.geonoise.com
Email: phawin@geonoise.asia

Credit: AIP Publishing

Categories
Asia Noise News

Dogs Can Experience Hearing Loss

Just like humans, dogs are sometimes born with impaired hearing or experience hearing loss as a result of disease, inflammation, aging or exposure to noise. Dog owners and K-9 handlers ought to keep this in mind when adopting or caring for dogs, and when bringing them into noisy environments, says Dr. Kari Foss, a veterinary neurologist and professor of veterinary clinical medicine at the University of Illinois at Urbana-Champaign.

In a new report in the journal Topics in Companion Animal Medicine, Foss and her colleagues describe cases of hearing loss in three working dogs: a gundog, a sniffer dog and a police dog. One of the three had permanent hearing loss, one responded to treatment and the third did not return to the facility where it was originally diagnosed for follow-up care.

The case studies demonstrate that those who work with police or hunting dogs “should be aware of a dog’s proximity to gunfire and potentially consider hearing protection,” Foss said. Several types of hearing protection for dogs are available commercially.

Just as in humans, loud noises can harm the delicate structures of a dog’s middle and inner ear.

“Most commonly, noise-induced hearing loss results from damage to the hair cells in the cochlea that vibrate in response to sound waves,” Foss said. “However, extreme noise may also damage the eardrum and the small bones within the inner ear, called the ossicles.”

Pet owners or dog handlers tend to notice when an animal stops responding to sounds or commands. However, it is easy to miss the signs, especially in dogs with one or more canine companions, Foss said.

“In puppies with congenital deafness, signs may not be noticed until the puppy is removed from the litter,” she said.

Signs of hearing loss in dogs include failing to respond when called, sleeping through sounds that normally would rouse them, startling at loud noises that previously didn’t bother them, barking excessively or making unusual vocal sounds, Foss said. Dogs with deafness in one ear might respond to commands but could have difficulty locating the source of a sound.

If pet owners think their pet is experiencing hearing loss, they should have the animal assessed by a veterinarian, Foss said. Hearing loss that stems from ear infections, inflammation or polyps in the middle ear can be treated and, in many cases, resolved.

Hearing-impaired or deaf dogs may miss clues about potential threats in their surroundings, Foss said.

“They are vulnerable to undetected dangers such as motor vehicles or predators and therefore should be monitored when outside,” she said.

If the hearing loss is permanent, dog owners can find ways to adapt, Foss said.

“Owners can use eye contact, facial expressions and hand signals to communicate with their pets,” she said. “Treats, toy rewards and affection will keep dogs interested in their training.” Blinking lights can be used to signal a pet to come inside.

Hearing loss does not appear to affect dogs’ quality of life, Foss said. “A dog with congenital hearing loss grows up completely unaware that they are any different from other dogs,” she said. “Dogs that lose their hearing later in life may be more acutely aware of their hearing loss, but they adapt quite well. A dog’s life would be significantly more affected by the loss of smell than by a loss of hearing.”

Written by:

Pitupong Sarapho (Pond)
Acoustical Engineer

Geonoise (Thailand) Co., Ltd.
Tel: +6621214399
Mobile: +66868961299
Email: pond@geonoise.asia

Credit: Diana Yates, University of Illinois at Urbana-Champaign

Categories
Asia Noise News Building Acoustics

Railway Noise

Rail transport is one of the main transportation modes today, both for moving passengers and goods. Every day people commute to work and back home by train, in the form of subway systems, light rail transit and other types of rail transport. These systems create noise both for the passengers inside the train and for the surrounding environment. In this article, we discuss the noise source components that we hear daily, both inside and outside the train.

If we pay attention to the noise when we are on board a train, there is more than one noise source that we can hear. The main sources of interior noise in a train are the turbulent boundary layer, air-conditioning noise, engine and auxiliary equipment, rolling noise and aerodynamic noise from the bogie, as illustrated in the following figure.

As an aside, we wrote about and recorded the sound of the Jakarta MRT. The link below may help you imagine the situation on board.

Exploring Jakartan Public Transportation Through The Sound

Rolling noise is caused by wheel and rail vibrations induced at the wheel/rail contact and is one of the most important components of railway noise. It depends on the roughness of both the wheel and the rail: the rougher their surfaces, the higher the noise level both inside and outside the train. To estimate the airborne component of rolling noise, we must consider the characteristics and roughness of both wheel and track.

Another component that contributes a great deal to railway noise is aerodynamic noise, which can come from more than one source. These sources contribute differently to internal and external noise: aerodynamic noise contributes quite significantly to internal noise even at lower speeds, while for external noise it contributes little when the train speed is relatively low. For example, a report by the Federal Railroad Administration (US Department of Transportation) states that aerodynamic sources start to generate significant noise at speeds of approximately 180 mph (around 290 km/h); below that speed, only rolling noise and propulsion/machinery noise are taken into account in external noise calculations. In addition to external noise, machinery noise also contributes to interior noise levels. This category includes engines, electric motors, air-conditioning equipment, and so on.

Several procedures are commonly followed when measuring railway noise. For train pass-by noise, ISO 3095, Acoustics – Railway applications – Measurement of noise emitted by railbound vehicles, is commonly used. This standard has had three editions: the first published in 1975, then revised and approved in 2005 and again in 2013. The commonly used measures for a train pass-by are the maximum level (LAmax), the Sound Exposure Level (SEL) and the Transit Exposure Level (TEL).
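
As a rough illustration of how SEL relates to a measured time history, the sketch below integrates A-weighted level samples of a pass-by and normalizes the total sound energy to one second. The pass-by shape and sample values are hypothetical, not from the standard.

    import numpy as np

    def sel_from_time_history(la_db, dt=0.125):
        """Sound Exposure Level (SEL) from an A-weighted level time
        history: total sound energy of the event normalized to 1 s,
        i.e. 10*log10(sum(10^(L/10)) * dt / 1 s)."""
        la = np.asarray(la_db, dtype=float)
        return 10.0 * np.log10(np.sum(10.0 ** (la / 10.0)) * dt)

    # Hypothetical 20 s pass-by: LAF samples at 8 Hz peaking near 85 dB
    t = np.arange(0.0, 20.0, 0.125)
    la = np.clip(85.0 - 0.5 * (t - 10.0) ** 2, 55.0, None)
    print(f"SEL = {sel_from_time_history(la):.1f} dB")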

For interior noise, the commonly used test procedure is specified in ISO 3381, Railway applications – Acoustics – Measurement of noise inside railbound vehicles. This procedure specifies measurements under a few different conditions, such as trains at constant speed, trains accelerating from standstill, decelerating vehicles, and stationary vehicles.

Written by:

Hizkia Natanael

Acoustical Design Engineer

Geonoise Indonesia

hizkia@geonoise.asia

Reference:

D. J. Thompson. Railway noise and vibration: mechanisms, modelling and means of control. Elsevier, Amsterdam, 2008

Federal Railroad Administration – U.S. Department of Transportation, High-Speed Ground Transportation Noise and Vibration Impact Assessment. DOT/FRA/ORD-12/15. 2012

Categories
Asia Noise News

Acoustics Glossary

Get a better understanding of acoustics with our glossary of terms. Let Geonoise Asia help you solve your noise problems today!

Arranged by:

Adetia Alfadenata

Acoustic Engineer

Geonoise Indonesia

support.id@geonoise.asia

Categories
Asia Noise News

How Do Sound Waves Work Underwater?

Do You Know?

Acoustics is not only about sound propagation in air, but also about its propagation in water. The study of how sound propagates and behaves in water is called underwater acoustics. Underwater acoustics is a branch of science, and it became a working technology as early as World War I. Even before that, in 1490, Leonardo da Vinci wrote: “if you stop your ship in the ocean and place one end of a long tube into the water and the other end to your ear, you will hear ships at a great distance.” This indicates that the idea behind underwater acoustics has been known for a long time.

During World War II, underwater acoustics was used for military purposes as a communication platform, channelling information through the water. In 1925, underwater acoustics was used to measure ocean depth from the returned sound waves; one application was finding aircraft that had crashed into the sea. Over time, many technologies were developed and much research was performed.

One application that is also useful for fishermen is the fish finder, a navigation tool that helps locate schools of fish in the ocean. From the frequency content of the returned sound, we can also estimate the distance and position of a school of fish relative to the ship.

In industry, underwater acoustics has been applied to detect the presence of oil and gas under the sea; the method is quite effective and efficient. In disaster management, early tsunami detection has been developed based on the propagation of infrasound detected on the seabed. In recent years, one technology that has attracted much research interest is the Autonomous Underwater Vehicle (AUV), an unmanned underwater vehicle that can observe underwater biology and physics. AUVs can be an excellent choice for surveying coastal waters because they can operate for long periods, and their use also avoids damage to coral reefs and marine ecosystems.

The need for underwater research is considerable, especially for countries with vast oceans, such as Indonesia. Underwater acoustic research is needed in mining operations, coral reef observation, offshore oil exploration, and investigating accidents at sea.

The speed of a wave is the rate at which vibrations move through the medium. Sound travels faster and farther in water than in air because the mechanical properties of water differ from those of air. The speed of sound in air is between 333 m/s and 340 m/s, whereas the speed of sound in water ranges from about 1500 m/s to 1520 m/s, more than four times faster. Sound propagation occurs through the successive compression and rarefaction of particles in a medium. At sea, the greater the depth, the higher the pressure; the compressed water particles pass the sound on without losing much energy. In addition, the density of water is higher than that of air. This is why sound travels fast and far in water.

The speed of sound in seawater is not, however, a constant. It varies by a small amount (a few percent) from place to place, season to season, morning to evening, and with water depth. Although the variations are not large, they have important effects on how sound travels in the ocean. Temperature also affects the speed of sound in seawater: sound travels faster, and thus farther, in warm water than in cold water.

Based on temperature, the sea can be divided into three layers: the mixed layer, the thermocline, and deep water. In the thermocline, temperature decreases rapidly from the mixed upper layer of the ocean toward the much colder deep water, and the speed of sound decreases with depth. In the layer below the thermocline, the temperature becomes nearly constant again while the pressure keeps increasing, so the speed of sound increases again with depth.


As we know, wavelength is inversely proportional to frequency:

λ = c / f

where c is the speed of sound and f is the frequency. The lower the frequency, the longer the wavelength. A 20 Hz sound wave is therefore 75 m long in water (1500 / 20), whereas in air it is only 17 m long (340 / 20). The sensor generally used to capture underwater sound is the hydrophone, an underwater microphone.

The decibel as a unit of sound pressure expresses the ratio between the measured pressure and a reference pressure, and the reference pressures in air and in water differ. Therefore, 150 dB of sound in water is not the same as 150 dB of sound in air. In air the reference pressure is 20 μPa, while in water it is 1 μPa. From the sound pressure level equation, the part of the conversion due to the reference pressures is:

20 log10(20 μPa / 1 μPa) ≈ 26 dB

The characteristic impedance of water is about 3600 times that of air, so for the same sound intensity:

10 log10(3600) ≈ 36 dB

Therefore, the air-to-water conversion factor is:

26 dB + 36 dB = 62 dB

For example, if the sound of a jet engine is 135 dB in air, the equivalent level in water is about 197 dB.
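
A small Python sketch of this conversion, using the two terms derived above:

    import math

    def air_db_to_water_db(l_air_db):
        """Convert an in-air SPL (re 20 uPa) to the in-water SPL (re 1 uPa)
        of a sound of equal intensity: ~26 dB for the reference-pressure
        change plus ~36 dB for the impedance ratio (~3600x)."""
        ref_term = 20.0 * math.log10(20.0 / 1.0)    # ~26 dB
        impedance_term = 10.0 * math.log10(3600.0)  # ~36 dB
        return l_air_db + ref_term + impedance_term

    print(f"{air_db_to_water_db(135.0):.0f} dB re 1 uPa")  # ~197 dB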

Written by:

Adetia Alfadenata

Acoustic Engineer

Geonoise Indonesia

support.id@geonoise.asia

Reference:

  • Urick, Robert J. 1983. Principles of Underwater Sound, 3rd Edition. McGraw-Hill Book Company
  • Nieukirk, Sharon. “Understanding Ocean Acoustics”. NOAA Ocean Explorer
  • Singh H, Roman C, Pizarro O, Eustice R. Advances in High Resolution Imaging from Underwater Vehicles. In: Thrun S, Brooks R, Durrant-Whyte H, editors. Robotics Research. vol. 28 of Springer Tracts in Advanced Robotics. Springer Berlin Heidelberg; 2007. p. 430–448
  • Pike, John. “Underwater Acoustics”. Accessed online via https://fas.org/man/dod-101/sys/ship/acoustics.htm
  • Discovery of Sound in the Sea. “How does sound in air differ from sound in water?” Accessed online via https://dosits.org/science/sounds-in-the-sea/how-does-sound-in-air-differ-from-sound-in-water/
Categories
Asia Noise News Building Acoustics

The Colors of Noise

Noise is a collection of random signals whose physical characteristics depend on the source. One of these characteristics is the shape of the spectrum, and many kinds of noise can be distinguished by their spectral character: White Noise, Pink Noise, Brownian Noise, Blue Noise, Violet Noise, Grey Noise, and others. The ones most often used in measurement and audio testing are White Noise, Pink Noise, and Brownian Noise.

Most people are very familiar with White Noise: the static-like sound of an air conditioner that lulls us to sleep by masking background noise is usually assumed to be White Noise, even though, technically, what we hear from the air conditioner fan is not White Noise. Many of the sounds we associate with White Noise are actually Pink Noise, Brownian Noise, Green Noise, or Blue Noise. In audio engineering there are various noise colors, each with its own unique spectrum, produced to enrich music arrangements, aid relaxation, and so forth. This article explains why static noise is not always White Noise.

Here are some sound colors that are quite familiar and often discussed in the world of audio engineering:

1. White Noise

The noise color most commonly mentioned in everyday life is White Noise. It is called “White” by analogy with white light, which contains all frequencies evenly, or flatly in mathematical terms. “Mathematically” is the right word because, in reality, the spectrum is never perfectly flat. White Noise has a power spectral density that is constant over frequency:

S(f) = N0

So in the case of White Noise, the signal power in a band is simply proportional to the bandwidth:

P = ∫ S(f) df = N0 (f2 − f1)

The resulting spectrum is a constant straight line, as in the following graph.

Keep in mind that the graph uses a logarithmic frequency axis, not a linear one, so the absolute frequency span of an octave at high frequencies is wider than at low frequencies. Here is a sample of White Noise that can be heard:

2. Pink Noise

Proportionally, the Pink Noise spectrum appears to decrease on a logarithmic scale, but it has equal power in proportionally wide bands. This means that Pink Noise has the same power in the range from 40 to 60 Hz as in the band from 4000 to 6000 Hz. Since humans hear in such a proportional space, where a doubling of frequency (an octave) is perceived the same regardless of the actual frequency (40–60 Hz is heard as the same interval as 4000–6000 Hz), every octave contains the same amount of energy, and Pink Noise is therefore often used as a reference signal in audio engineering. Its spectral power density, compared with White Noise, decreases by 3 dB per octave (density proportional to 1/f), which is why Pink Noise is often called “1/f noise”. Some people associate pink with a mix of red and white, where pink is brighter than red and fainter than white, so it is pictured as a decreasing spectrum of the form 1/f^a with a close to 1. Mathematically, the Pink Noise spectrum is:

S(f) ∝ 1 / f

The depiction of the curve produced by Pink Noise is as follows:

Pink Noise sounds like the following audio file:

3. Brownian Noise (Red Noise)

This noise color has several names: Brown Noise, Brownian Noise, or Red Noise. It is named after Robert Brown, who discovered Brownian motion (the random walk, or “drunkard’s walk”), because the noise can be generated by a Brownian-motion process. Pictured as a red light darker than pink and white, its spectrum falls off more steeply than Pink Noise (proportional to 1/f², a decrease of 6 dB per octave). Plotted together with White and Pink Noise, Red Noise forms the steepest of the three curves, as follows:

Brownian Noise sounds like the following audio file:

4. Blue Noise (Azure Noise)

If Red Noise and Pink Noise have falling spectra, Blue Noise is the opposite: its spectrum curve rises, inversely to Pink Noise. Blue Noise’s power density increases by 3 dB per octave with increasing frequency (density proportional to f) over a finite frequency range. In computer graphics, the term “blue noise” is sometimes used more loosely for any noise with minimal low-frequency components and no concentrated spikes in energy; such noise is useful for dithering. Cherenkov radiation is a naturally occurring example of almost perfect blue noise, with the power density growing linearly with frequency over spectral regions where the permittivity and index of refraction of the medium are approximately constant. The exact density spectrum is given by the Frank–Tamm formula. In this case, the finiteness of the frequency range comes from the finiteness of the range over which a material can have a refractive index greater than unity. Cherenkov radiation also appears as a bright blue color for these reasons.

The curve produced by Blue Noise is as follows:

Blue Noise sounds like the following audio file:

5. Violet Noise (Purple Noise)

If Blue Noise is the opposite of Pink Noise, then Violet Noise can be regarded as the opposite of Red (Brownian) Noise. Its power density increases by 6 dB per octave with increasing frequency (density proportional to f²) over a finite frequency range. Violet Noise, often also called Purple Noise, is likewise known as differentiated white noise, because it results from differentiating a white noise signal.

The curve produced by Violet Noise is as follows:

Violet Noise sounds like the following audio file:

6. Grey Noise

Grey Noise is white noise shaped by a psychoacoustic equal-loudness curve (roughly an inverted A-weighting curve) over a specific frequency range, which gives the perception that it is equally loud at all frequencies. This is in contrast to standard white noise, which has equal strength over a linear scale of frequencies but is not perceived as equally loud, due to the biases of the human equal-loudness contour.

The curve produced by Grey Noise is as follows:

Grey Noise sounds like the following audio file:
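
Since the colors above differ mainly in the exponent of their power-law spectra, they can all be synthesized the same way: generate white noise and shape its spectrum in the frequency domain. A minimal sketch (the sample rate and length are arbitrary choices, not from any standard):

    import numpy as np

    def colored_noise(exponent, n_samples=2**16, fs=48000, seed=0):
        """Generate noise with power spectral density S(f) ~ f**exponent.

        exponent =  0 -> white, -1 -> pink, -2 -> Brownian/red,
                   +1 -> blue,  +2 -> violet.
        """
        rng = np.random.default_rng(seed)
        # Start from flat-spectrum (white) noise in the frequency domain.
        spectrum = np.fft.rfft(rng.standard_normal(n_samples))
        f = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        f[0] = f[1]  # avoid division by zero at DC
        # Amplitude scales as f**(exponent/2) so power scales as f**exponent.
        spectrum *= f ** (exponent / 2.0)
        x = np.fft.irfft(spectrum, n=n_samples)
        return x / np.max(np.abs(x))  # normalize to +/-1

    white = colored_noise(0)
    pink = colored_noise(-1)
    brown = colored_noise(-2)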

Written by:

Betabayu Santika

Acoustic Design Engineer

Geonoise Indonesia

Beta@geonoise.asia

Sources:

Pics: Noise Curves By Warrakkk – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=19274696

Hartmann, William M. Signals, sound, and sensation. Springer Science & Business Media, 2004.

“Federal Standard 1037C”. Institute for Telecommunication Sciences. Institute for Telecommunication Sciences, National Telecommunications and Information Administration (ITS-NTIA). Retrieved 16 January 2018.

Lau, Daniel Leo; Arce, Gonzalo R.; Gallagher, Neal C. (1998), “Green-noise digital halftoning”, Proceedings of the IEEE, 86 (12): 2424–42, doi:10.1109/5.735449

Joseph S. Wisniewski (7 October 1996). “Colors of noise pseudo FAQ, version 1.3”. Newsgroup: comp.dsp. Archived from the original on 30 April 2011. Retrieved 1 March 2011.

Categories
Asia Noise News Building Acoustics Noise and Vibration Product News

The Nano-guitar String that Plays Itself

Scientists at Lancaster University and the University of Oxford have created a nano-electronic circuit which vibrates without any external force.

Using a tiny suspended wire, resembling a vibrating guitar string, their experiment shows how a simple nano-device can generate motion directly from an electrical current.

To create the device, the researchers took a carbon nanotube, which is a wire with a diameter of about 3 nanometers, roughly 100,000 times thinner than a guitar string. They mounted it on metal supports at each end, and then cooled it to a temperature of 0.02 degrees above absolute zero. The central part of the wire was free to vibrate, which the researchers could detect by passing a current through it and measuring a change in electrical resistance.

Just as a guitar string vibrates when it is plucked, the wire vibrates when it is forced into motion by an oscillating voltage. This was exactly as the researchers expected.

The surprise came when they repeated the experiment without the forcing voltage. Under the right conditions, the wire oscillated of its own accord.

The nano-guitar string was playing itself.

Lead researcher Dr Edward Laird of Lancaster University said: “It took us a while to work out what was causing the vibrations, but we eventually understood. In such a tiny device, it is important that an electrical current consists of individual electrons. The electrons hop one by one onto the wire, each giving it a small push. Usually these pushes are random, but we realized that when you control the parameters just right, they will synchronize and generate an oscillation.”

So what note does the nano-guitar play?

“The nanotube is far thinner than a guitar string, so it oscillates at much higher frequency — well into the ultrasound range so no human would be able to hear it.

“However, we can still assign it a note. Its frequency is 231 million hertz, which means it’s an A string, pitched 21 octaves above standard tuning.”
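
As a quick check of that claim, assuming “standard tuning” refers to the open A string of a guitar at 110 Hz:

110 Hz × 2^21 = 110 × 2,097,152 Hz ≈ 2.31 × 10^8 Hz = 231 MHz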

The nano-oscillator could be used to amplify tiny forces, such as in novel microscopes, or to measure the viscosity of exotic quantum fluids. These experiments will be pursued in a new laboratory that Dr Laird is setting up in the Physics Department at Lancaster, supported by a €2.7M grant from the European Union.

Credit: https://www.lancaster.ac.uk/news/the-nano-guitar-string-that-plays-itself

Written by: Phawin Phanudom