Final Results of Gateway EVP Project Phase 1
It’s official! We’ve reached our goal of 1,000 usable samples for the Gateway EVP Study. We received a total of 1,383 files to analyze for potential samples. We would like to extend our deepest gratitude to everyone who assisted us with this study. I would like to take this time to recognize the following people for their generous contributions to IRG’s Gateway Project:
- Jim Pace with Sooner Paranormal of Oklahoma;
- Kenny Plank with Family Haunts;
- Melissa Tanner with TnT Paranormal;
- Matt Schenk;
- Chris Pollock;
- Kristine McCracken;
- Jason/Ty Phillips; and
- Светлана Мусатова, my Russian contributor.
I would also like to specifically recognize:
- Chad Stambaugh for his contributions to IRG’s Gateway Project; and
- Melissa Tanner with TnT Paranormal for her assistance with identifying a critical error in our process that could have cost us the entire project.
The results of the Gateway EVP Study are as follows:
Of the 1,383 samples, 72% were deemed usable based on our criteria while 28% were excluded due to IRG’s WIND Principle. Of that 72%:
- 0.5% occurred in the TLF range;
- 9.1% occurred in the ELF range;
- 42.5% occurred in the SLF range;
- 44.2% occurred in the ULF range;
- 3.7% occurred in the VLF range;
- 3.7% occurred below 15 Hz;
- 96.5% occurred between 15 Hz and 20 kHz; and
- 45.9% occurred between 300 Hz and 4 kHz.
The mean frequency was 596.04 Hz in the Ultra-Low Frequency (ULF) range.
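For readers who want to see how a sample's dominant frequency maps onto the bands named above, here is a minimal sketch in Python. The cutoffs below are the common ITU-style designations; IRG's exact boundaries were not published here, so treat them as assumptions.

```python
# Illustrative band boundaries (ITU-style, assumed -- not IRG's published
# cutoffs): TLF < 3 Hz, ELF 3-30 Hz, SLF 30-300 Hz, ULF 300 Hz-3 kHz,
# VLF 3-30 kHz.
BANDS = [
    (3.0, "TLF"),
    (30.0, "ELF"),
    (300.0, "SLF"),
    (3_000.0, "ULF"),
    (30_000.0, "VLF"),
]

def classify(f_hz: float) -> str:
    """Return the band label for a frequency given in hertz."""
    for upper, label in BANDS:
        if f_hz < upper:
            return label
    return "above VLF"

print(classify(596.04))  # "ULF" -- consistent with the reported mean
```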
Update Conclusions:
- Usable samples indicated the dominant range was almost half-and-half between the SLF and ULF ranges of the electromagnetic spectrum with the average frequency occurring in the ULF range.
- We used the period and frequency equations for our calculations:
a. T = 1/f; and
b. f = 1/T.
We did not use the wavelength formula because we were not concerned with the distance of the waves at this time. (A short worked sketch of these two equations follows this list.)
- The finding that no single usable sample exceeded the lower portion of the VLF range remained constant. No sample exceeded 7,200 Hz.
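As a rough illustration of how the two equations relate, here is a minimal sketch using the study's reported mean frequency; the function names are illustrative, not part of IRG's tooling.

```python
# A minimal sketch of the period/frequency relationship used above:
# T = 1/f and f = 1/T, with f in hertz and T in seconds.

def period_from_frequency(f_hz: float) -> float:
    """Return the period T (seconds) for a frequency f (hertz)."""
    return 1.0 / f_hz

def frequency_from_period(t_s: float) -> float:
    """Return the frequency f (hertz) for a period T (seconds)."""
    return 1.0 / t_s

# The study's reported mean frequency of 596.04 Hz corresponds to a
# period of roughly 1.678 milliseconds.
t = period_from_frequency(596.04)
print(t)                         # ~0.001678 s
print(frequency_from_period(t))  # 596.04 Hz (round-trip check)
```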
Update Recommendations:
- IRG will continue to accept potential EVP files for inclusion in this study so we can keep refining our results.
- We are continuing to collaborate with experts in sound engineering, physics, and other fields to ensure the accuracy of our results.
- During investigations, it would be advantageous for groups to calculate the most common frequency per location and adjust equipment settings accordingly. For instance, the default settings on digital recorders could be adjusted using the low- or high-pass filters, allowing only low or only high frequencies through at a given time for maximum results.
- IRG can provide consulting services, free of charge, to groups or individuals who wish to conduct their investigations in this manner. If you wish to seek our assistance for this purpose or wish to send possible EVP samples, please send us a message on Facebook.
The Gateway EVP Project Phase 1 Methodologies
We’ve had a lot of questions and comments about Gateway since we posted the unofficial final results here on our page in July of 2013. I say “unofficial” because Gateway is still ongoing, but there were many requests for periodic updates during Phase 1 of the project – the study itself. Phase 2, which is currently underway, is the EVP experiment.
I wanted to take a minute to give everyone a little breakdown of some of our methodologies. I can only divulge certain things because we are still under non-disclosure. I will provide as much information as possible without violating the contract.
Over the years, we heard many questions regarding the phenomenon of EVPs. Our focus was that the common hypotheses did not seem to match the observations accurately. In fact, the hypotheses, when considered closely, appeared to be cases where fundamentally incorrect assumptions were drawn from simple observations. One of the most commonly accepted ideas was that EVPs were not heard at the time they were recorded because they are sound waves occurring at frequencies below the range of audible human hearing. Hence, the members of IRG set out to determine whether this assumption was accurate. Additionally, we questioned whether there was a specific range of frequencies applicable to EVPs, like that of speech and hearing. Was there a minimum and maximum range in which they could or did occur? There were additional questions we wanted to explore as well. Thus, the Gateway EVP Project was born.
We began the study researching the phenomenon and all that entailed. Based on our understanding of the characteristics, behaviors, and functions of various waves and their frequencies, we outlined the parameters of the study and set out to gather samples.
We spent months working on criteria and parameters. The very first thing we had to establish was a definition for the phenomenon we were attempting to study. For the purposes of the study, we defined EVP as anomalous recordings captured on electronic media that are not heard at the time of recording. We also further defined EVP for this study as that which was recorded on either a digital or analog voice recording device. After defining EVP for the purposes of the study, we next had to identify all of the criteria. As a result, we generated the Guidelines For Inclusion (GFI):
1. It had to be an EVP only, verbal or vocal in nature. Examples of EVPs that were vocal or verbal in nature included, but were not limited to:
a. Words, phrases, sentences;
b. Screams, yells, screeches;
c. Laughs or giggles;
d. Coughs; and
e. Crying.
2. It had to be obvious, in every sense of the word, that those present during the recording did not hear it at the time. If there was any indication that someone in the recording “heard something”, tagged something, or said anything even remotely close to “did you hear that”, regardless of what it may have been referring to, the entire recording was excluded from the study. Any acknowledgement was grounds for exclusion – “did you hear that”, “what was that”, “that was loud”, “was that outside”, etc. This was to avoid potential contamination of the results and ensure accuracy.
3. IRG’s WIND principle was to be remembered at all times. The WIND principle simply states When IN Doubt, throw it out. It is the same principle many groups use when analyzing evidence. If the operator doubted or questioned the authenticity, reliability, or validity of any sample at any time, the entire sample was excluded.
4. Samples had to be original copies only with no alterations of any kind. It had to be the raw recordings. No enhancements, no pitch adjustments, etc. Absolutely NO changes of any kind. Any sample in doubt was completely excluded.
5. Samples had to be audio files only from a digital or analog voice recorder. No audio from any other type of equipment was included.
6. We required specific information about the sample, which included, but was not limited to:
a. Date & time of recording;
b. Make & model of the equipment it was recorded on; and
c. Names of those present during the recording.
7. Samples had to be submitted in only one of two file formats: either mp3 or wav.
8. Samples had to be submitted via email. We did not accept any samples from other sources, such as a website. It had to be an emailed sample to an email address we provided to them.
9. The samples had to be emailed by the original owner of the files unless otherwise permitted. Owners maintained all rights to their submissions, and no information, other than that which was immediately relevant to the study, was to be released to third parties without the prior written consent of the owner of the file and the IRG director.
10. For the purposes of the study, we were not concerned with content (“what was being said”) in the recording.
11. At least three people had to agree that there was “something said” present in the recording. Those three could only be from IRG or from one of our partners assisting with the project. If we were not absolutely positive there was something anomalous present, the sample was excluded.
12. Samples had to be analyzed under IRG’s Objectivity and Impartiality Principle.
13. We only included samples where the potential EVP was absent any other type of noise or voices. For instance, an EVP that came in on top of an investigator’s voice would be excluded. If the voice occurred between investigators speaking, and no other investigator voices occurred at the time, it would have been included, assuming it met the other conditions.
If any sample did not meet all of these stipulations, it was excluded.
Prior to beginning the study, we designed an EVP Intake Form to document individual EVPs and their file information. Additionally, we established an EVP Master Database to record the information obtained from the intake form.
When we completed the criteria and parameters, we next set out to collect samples. A mass notice was sent out on various social media requesting samples and explaining how to submit them. Soon, samples started pouring in. We listened to the submitted samples and analyzed them for possible inclusion, making sure they satisfied all of the mentioned criteria. Once it was determined the sample was, in fact, usable, we analyzed the waveform. We then manually calculated the period and frequency of each sample.
** And before anyone says it – Yes, we are aware that audio recordings contain many different frequencies and we did take that into consideration. Our goal was to determine a potential range of frequencies rather than trying to isolate individual ones.**
Each sample, regardless of usability, was assigned a file number, and the corresponding information was loaded into the database. Our goal was to have at least 1,000 viable samples on which to base the results, as we felt this would be a sufficient representation. Of course, we are continuing to work on this study (refining as necessary) to make the results even more accurate. The database was generated in an Excel spreadsheet with several cells formatted with various formulas for automatic generation of data when certain information was recorded. This also allowed us to automatically generate results and graphs as needed to finalize the results and prevent, as much as possible, operator error.
I hope this answers many of your questions. We welcome constructive criticism, so please feel free to play devil’s advocate. After all, any information that helps us is greatly appreciated.
Please note: While we certainly welcome and encourage constructive criticism, we will not tolerate disrespectful manners or language. This is an attempt to scientifically scrutinize one aspect of EVP, and we expect responses, comments, questions, and feedback to remain respectful.
Gateway EVP Project Descriptive Statistics: A Brief Explanation
11/24/2014
By Theresa Byess
We have had some questions regarding the publication of our Descriptive Statistics, rather than Inferential Statistics, for the Gateway project. (A table of the descriptive statistics is available at the bottom of this page.) The primary question, of course, is “what are they and what do they mean?” I wanted to take this opportunity to explain a little about why we published this data and what it means for the project.
First, our intention was not to “infer”, as is the case with Inferential Statistics. Descriptive statistics are used to summarize the basic features of the data in the study in which they are produced. We used descriptive statistics because the purpose of the study was to generate quantitative data, meaning we were attempting to quantify the hypothesis by generating numerical data that can be transformed into usable statistics. We were attempting to generalize results from a large sample to the broader population. The purpose of the study was to formulate facts and ascertain any patterns or correlations. Because we were testing pre-specified concepts and hypotheses surrounding the theories of EVPs, we used quantitative, rather than qualitative, methods. It is a deductive process, meaning we logically deduced from general statements, examining potential possibilities to reach a specific logical conclusion. We also used this method because it is more objective than qualitative methods, using the analyzed statistical information as the basis for conclusions. This method provides an “overall” point of view based on that statistical information, and the descriptive statistics simplify it.
So, now that you know why we used this particular method of research and analysis, I can begin to explain what the statistical information means.
The mean, median, and mode are often referred to as measures of central tendency, which attempt to describe what the typical data might look like. They can be thought of as different forms of expressing an average for the data. The mean is the most common form of expressing the central tendency, which is also why it appears first in the statistical representation of our data. It can be viewed as the true average: the sum of all of the usable frequencies (adding together all the usable samples we collected) divided by the total number of frequencies. Here’s an example:
If our frequencies were 5, 10, 18, 26, 3, and 110, the sum of these numbers would be 172. We have a total of 6 numbers in that set. Hence, we divide to reach our average: 172 ÷ 6 ≈ 28.67.
Therefore, our mean, or average, for this set of data would be 28.67 Hz. The concept is not difficult. The mean for our data was 596.04 Hz.
The median is nothing more than the middle value of the set of data we had. The median for our data set was 250 Hz. The median can be tedious to calculate manually, especially for a large data set, and especially when the set contains an even number of values (in which case the median is the average of the two middle values).
All of these calculations were predetermined based on formulas, which automatically calculated this information when certain points of data were entered into the database. This allows our results to be more accurate.
The mode is probably the most important to us here at IRG with this study simply because it represents the most commonly occurring value in the data set, which in our case was 1,000 Hz. In other words, 1,000 Hz occurred more frequently than any other frequency within the data set. Hence, I can use this number to predict future behavior. I can predict, then, that EVPs will most likely fall near 1,000 Hz, in the Ultra Low Frequency (ULF) range, assuming the results from our calculations remain constant.
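For anyone who wants to reproduce these three measures, here is a minimal sketch using Python's standard library and the article's own six-number example set (not the real Gateway data):

```python
from statistics import mean, median, mode

# The worked example set from above (illustrative, not Gateway data).
example_hz = [5, 10, 18, 26, 3, 110]

print(round(mean(example_hz), 2))  # 28.67 -- matches the worked example
print(median(example_hz))          # 14.0 -- average of 10 and 18 (even-sized set)

# mode() is only meaningful when values repeat; a toy set with a repeat:
print(mode([250, 1000, 1000, 596]))  # 1000 -- the most frequent value
```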
We also had to factor in the uncertainty of our measurements because there are so many factors potentially at play, such as interference. There are two ways of statistically representing that uncertainty – standard error (AKA the standard deviation of the mean) and standard deviation (AKA the standard deviation of a single measurement). These statistics describe how “spread out” the data was.
Our mean frequency, for instance, was 597 Hz (rounded for this example) across the 1,000 usable samples. However, not all of the frequencies were 597 Hz. Some were lower, others were higher. The standards describe to us how spread out this information was. In this case, it was the sample standard deviation rather than the population standard deviation: all we had was a sample, but we wished to make a general statement.
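As a small illustration of the two “spread” statistics named above, here is a sketch using the same six-number example set; the numbers are illustrative only:

```python
from math import sqrt
from statistics import stdev  # sample (n-1) standard deviation

example_hz = [5, 10, 18, 26, 3, 110]

sd = stdev(example_hz)           # standard deviation of a single measurement
se = sd / sqrt(len(example_hz))  # standard error (standard deviation of the mean)
print(round(sd, 2))  # ~40.75
print(round(se, 2))  # ~16.64
```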
I should note here that while I have some background in statistical analysis, I am by no means an expert. The information I am providing is based on my limited knowledge and we are attempting to send this information to statistical experts for their analysis of the data.
Having said that, I believe the standard error, for this particular study, is irrelevant. However, I will know more about this statistic once we have received that information.
Here’s a great example of my lack of knowledge in this area. I understand that a lower standard deviation usually means the values in the data are closer to the mean on average, and a larger standard deviation means the values are farther from the mean on average. The fact that we have a larger standard deviation simply means there is a larger amount of variation in the samples being studied. Because the frequency range is generalized, the variation is higher. However, if we were to focus on a smaller subset, such as the ULF and SLF ranges, the dominant ranges of the data, our standard deviation would be much smaller, reflecting a narrower spread. Our standard deviation of 945.51 Hz (rounded) reflects the fact that the variation from the average is high. I also know that the smaller the standard deviation is relative to the mean, the more reliable the mean can be considered. In this case, it would appear the mean is not reliable given the standard deviation. However, this is something we are attempting to verify with statistical experts at Princeton and will update this information once it has been received.
Sample variance reflects the variance within our sample. We took a sample of the total population – 1,000 usable samples in this case – and used that sample to estimate the frequencies of the entire population of EVPs. The sample variance helps us determine how spread out the frequencies are. Again, while I understand this concept, I am not sure how to apply this knowledge to our results, which we have submitted for more professional insight.
I think perhaps the single most important statistic in the set is the confidence interval. Using a 95% confidence level, it is the most useful way to support the reliability and validity of the results we are showing. Reliability, of course, refers to repeatability. We believe our results are consistent and that we have a representative sample that is a true reflection of that which we were researching – EVPs. Our conclusions rest on the idea that a confidence interval that is narrow relative to the mean suggests the estimated mean is close to its true value, allowing us to have confidence in it.
However, it should be noted here that reliability does not necessarily mean our conclusions are valid. It merely means they are more likely to be. This is the reason for Phase 2 of the Gateway EVP Project – to test our conclusions based on this information.
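For the curious, here is a hedged sketch of how a 95% confidence interval for the mean can be computed from the summary numbers quoted in this series (mean 596.04 Hz, sample standard deviation 945.51 Hz, n = 1,000). It assumes a normal approximation with the usual 1.96 multiplier; the project's published interval may differ.

```python
from math import sqrt

mean_hz, sd_hz, n = 596.04, 945.51, 1000  # figures quoted in this article

se = sd_hz / sqrt(n)    # standard error, ~29.90 Hz
half_width = 1.96 * se  # ~58.60 Hz at the 95% level (normal approximation)
print(f"95% CI: {mean_hz - half_width:.2f} Hz to {mean_hz + half_width:.2f} Hz")
# -> roughly 537.44 Hz to 654.64 Hz
```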
Kurtosis and Skewness are, to our understanding, more mathematical and graphical representations of the data, of which we have an extremely limited understanding. This information is currently being examined by multiple professional sources specifically trained in statistical analysis. Additional information will be updated once it becomes available.
The Minimum and Maximum, I believe, represent the lowest and highest data points in our set: 1.2 Hz (rounded) being the lowest, while 7,143 Hz (rounded) is the highest.
The Sum, of course, is pretty self-explanatory. It is the sum of the entire data set.
The Count is how many samples were included and calculated.
I hope this helps our readers understand a little more about the descriptive statistics and what the purpose for them is. If you have any questions, comments, or concerns, please feel free to let us know. We will post additional information as it becomes available.
Logical Fallacies of the Television Show “Ancient Aliens” and A Critique of The Video “Ancient Aliens Debunked” By Chris White
Written by IRG Director Theresa Byess
12/27/2014
I just spent the past 3 hours watching one of the single best videos I have ever seen on the Internet. You can access the video by clicking here. It can also be found on YouTube by typing in “Ancient Aliens Debunked Chris White”.
At IRG, we pride ourselves on our fervent objectivity when researching the beliefs and claims of others. Having said that, I wish I could personally shake this guy’s hand. This video is incredible and I would advise everyone to watch it. When Ancient Aliens aired its first ever episode, I began watching the show with an open mind and an open heart. Almost immediately, I was utterly disgusted and could not stomach watching anymore. The entire show is inundated with outright lies and logical fallacies, some of which are explained in this video. Now, let me first mention that the narrator is adamant about remaining objective in his analysis of the claims in the show, and he certainly does a great job. He presents clear proof, not just evidence but proof, of the lies and inaccuracies from both those who are presented on the show and the show itself. He presents FACTS. Facts are things that are indisputable – truths established by actual experience or observation. I also want to mention that the show does not present any true experts so far as I can tell. There are several doctors and PhD holders, but none of them are archaeologists, anthropologists, etc. They appear to be predominantly authors of books. However, I have not watched a single episode from start to finish because I cannot take the lies. So, if I find out I am wrong about that, it will be retracted.
I would also urge everyone to visit the page of the expert presented in this video – Dr. Heiser. His website is located here. The majority of believers in the ancient aliens theories garner their conclusions, beliefs, and opinions from the writings of Zecharia Sitchin, who, after careful scrutiny, has been completely and utterly discredited. The same applies to Erich von Däniken, someone featured on just about every episode so far as I can tell. Unfortunately, this gentleman appears to have absolutely no idea what he is talking about. Even if I had not seen this video, the outright false claims Erich von Däniken makes about certain biblical references would have been enough. One such case is mentioned toward the end of this video, regarding the supposed artificial insemination of Betenos, mother of Noah, and Enoch, his ancestor. This is just one of many claims that are just plain false, and there is no other way to say it. And this is unfortunate.
I know there is currently no law of nature, physics, or mathematics which prevents the possibility of advanced civilizations on other planets in the universe/multiverse. In fact, it is quite the opposite, although the common belief among many scientists is that life on other planets probably exists on a much smaller scale, such as micro-organisms, so far as I know. Yes, there are pyramids and ziggurats all over the world with eerily similar characteristics, such as electromagnetic energies being channeled up and out into the sky, or in the style of the construction itself. Yes, there are striking similarities and shared characteristics. Does that mean they were created by aliens? No.
Think about this logically for a moment. We are talking about humans without television, cell phones, tablets, computers, etc. We are talking about humans who spent their entire existence studying and observing the world around them, including the heavens – creating massive structures with unparalleled precision and predicting the motion of the heavens with extreme accuracy. In the words of theoretical physicist Dr. Michio Kaku, “even back then, 2,000 years ago, they [referring to the ancients and the Greeks] knew the earth curves and by looking at the shadows, they calculated the size of the earth to within about ten percent accuracy. They actually calculated the distance from the earth to the Moon and the rough dimensions of the distance from the earth to the Sun. So, in other words, the ancients were no fools”. This actually coincides with statements made in the video about earlier beliefs regarding the idea that the earth was flat. Proponents of ancient alien theories claim medieval art depicts a rounded rendition of the earth, yet the common belief during those times was supposedly that the earth was flat. Here, they are referring to the Middle Ages, which, if my high school memory serves me accurately, ran from about 500–1600 AD. So, at what point during that time did the knowledge of the curvature of the earth, mathematics, and observations of shadows change to an earth-is-flat mentality when the Greeks had already proven otherwise long before that time?! This is extremely hard to digest, despite the understanding that a lot of the knowledge and wisdom of the Greeks was lost when the empire fragmented.
Admittedly, I am absolutely furious with the underestimation of human potential in just about every single field out there – scientific or otherwise.
Another thing I would like to mention: the video touches on this as well, mentioning the pyramids of Giza being constructed of granite. First, as far as my research indicates, the pyramids are not constructed entirely of granite as some believe. You can find this information on Geology.com if you would like to reference it. Granite is formed from the “slow crystallization of magma beneath the earth’s surface”. The blocks that make up the outside of the pyramids of Giza are actually limestone and sandstone. Limestone is composed mostly of calcium carbonate, much of it formed in water from the fossils of marine animals. To quote Geology.com:
“Limestone is a sedimentary rock composed primarily of calcium carbonate (CaCO3) in the form of the mineral calcite. It most commonly forms in clear, warm, shallow marine waters. It is usually an organic sedimentary rock that forms from the accumulation of shell, coral, algal and fecal debris. It can also be a chemical sedimentary rock formed by the precipitation of calcium carbonate from lake or ocean water.”
Furthermore,
“Most limestones form in shallow, calm, warm marine waters. That type of environment is where organisms capable of forming calcium carbonate shells and skeletons can easily extract the needed ingredients from ocean water. When these animals die their shell and skeletal debris accumulate as a sediment that might be lithified into limestone. Their waste products can also contribute to the sediment mass. Limestones formed from this type of sediment are biological sedimentary rocks. Their biological origin is often revealed in the rock by the presence of fossils.
Some limestones can form by direct precipitation of calcium carbonate from marine or fresh water. Limestones formed this way are chemical sedimentary rocks. They are thought to be less abundant than biological limestones.”
Now, scientists have already discovered that North Africa was once under water. This is no secret. When you examine the blocks that construct the pyramids, you will see that about 40% of each and every block used contains the fossilized remains of tiny marine animals called Nummulites. The pyramids are constructed of different materials. The outside consists almost entirely of limestone quarried from the plateau. The inside, or inner shell, as well as the King’s Chamber, consists of the granite I am sure is being referenced in the video – my guess is because of its stability and strength.
There is really nothing else I can say that is not already mentioned in this extensive video. For anyone who is considering these theories based solely on this television show, I would strongly recommend doing your homework first so that you can come to an educated conclusion about what you believe. To start, watch this video.
The one thing I will mention I disapproved of as far as the presentation of information in this video was the use of Wikipedia as a reference. Wikipedia is notoriously unreliable. However, there are a multitude of other credible resources available providing the same information as was presented in the video referenced from Wikipedia.
The rebuttal facts presented about Pumapunku are just one example. In the video, the narrator presents a Wikipedia excerpt describing the composition of the stones of the structures in rebuttal to the claims of Erich von Däniken and Giorgio Tsoukalos. Publications in journals and other sources support the narrator’s position, such as the Journal of Archaeological Science. Archaeology’s Interactive Dig, which can be accessed by clicking the following link, has also posted detailed information regarding the site and excavation information directly contradicting the claims of proponents on the show about this site. Those who are listed as participating in the research of the site include the University of Pennsylvania, the Department of Archaeology in Bolivia, the University of Wisconsin, the University of Denver, the Massachusetts Institute of Technology (MIT) in Cambridge, students from the Bolivian university UMSA, and Harvard. Alexei Vranich, the Director of the Tiwanaku project, in reference to a question about the stone quarries, states, “Ponce Sangines published an extensive study on the origin of sandstones at the Pumapunku temple and ideas on how they were constructed. His book is called Pumapunku. Pierre Protzen’s study is one of the best on the particulars of the masonry and construction method. He should be coming out with a substantial publication on his several years of study on the site”. Now, the Director has also gone on record stating that, yes, the structures are constructed of sandstone, something Däniken and Tsoukalos emphatically deny. Really?
Another point corroborating the video is Vranich’s response to the claim that the Gateway is 14,000 years old:
“Tiwanaku is a magnet for Atlantis hunters and a variety of new agers. The idea that Tiwanaku is 14,000 years old is based on a rather faulty study done in 1926. Since then, there has been a huge quantity of work both on the archaeology and geology of the area, and all data indicates that Tiwanaku existed from around A.D. 300-500 to 900-1000.
Still, the Atlantis hunters flock to the site. I believe the Discovery Channel is even making another documentary on the possibility that the Andes is the lost continent described by Plato.
As for the elephants and other animals that are supposed to be on the Gateway, I really can’t find them myself. One carving that is frequently cited as an elephant (including by several guides) is in fact a condor”.–Alexei Vranich
As you can tell, the gentleman in this video certainly did his homework and provided references that could be verified with additional resources. Clearly, this is one person who knows what he is talking about, which is unfortunately more than I can say about those in support of these theories. The theories have absolutely no basis in truth and would certainly not hold up in a court of law.
Here is another source of information that supports the truth: Ancient Wisdom, Pumapunku Bolivia. This site confirms the weight of the heaviest stone being about 131 metric tons and not the 800 number provided on the show by Daniken. It also mentions the “sandstone slabs”.
As you can clearly see, the narrator is correct when he states the claims on the show are outright lies and deceptions and that those who are on the show, often viewed as “the experts”, are clearly falsifying information. At best, their credibility is now ruined because they have utterly failed to consider evidence to the contrary. In the paranormal field, we often refer to such persons as pseudoskeptics or pseudobelievers, meaning they emphatically deny or believe despite evidence to the contrary.
There are logical fallacies at work among those on the show. For those who do not know, logical fallacies are simply errors in logic. The word “fallacy” is derived from the Latin term fallere, which means “to deceive”. Some logical fallacies are intentional and some are unintentional. As it should be, the narrator in this video has identified these fallacies, which is an important step in assessing the validity and reliability of the claims. Rightly, the narrator has attacked the validity of the claims and methods rather than the people themselves, making his statements more persuasive and credible. He maintains objectivity. Therefore, he is not guilty of ad hominem.
Those on the show are guilty of ad populum, which simply refers to an argument that appeals to the prejudices and emotions of the masses as a method of garnering support for their claims. The language used can act as a smoke screen, hiding a lack of ideas in an argument, which is something we can clearly see on the show when strong contradictory facts are presented in rebuttal.
Another fallacy, the Bandwagon Appeal, is also at play. Just from this video alone, Däniken and Tsoukalos make many references to certain groups of people as a whole believing or thinking one way or another, such as claiming archaeologists do not know much about the site and that it still baffles them when, in reality, they have a far better understanding of the site than these two give them credit for. They are essentially presenting information without weighing or mentioning the evidence behind what is being promoted in enough detail.
Other logical fallacies they are guilty of are:
- Begging The Question;
- False Analogies;
- False Dilemma (which is mentioned many times in the video);
- Hasty Generalization;
- Non Sequitur;
- Post Hoc, Ergo Propter Hoc; and
- Stacking The Deck, aka Data Beautification, (perhaps the number one fallacy they are guilty of, which is when only the evidence supporting a premise or belief is presented while disregarding or withholding contrary evidence).
The primary reason I support the information in this video is because it appears to follow the Toulmin model, a model for thinking and responding like a true skeptic. Healthy skepticism provides a logical foundation on which to identify flaws in a claim thus promoting advancement in understanding and knowledge. Approaching from a truly skeptical point of view, this model allows one to realize, weigh, and correct/address an argument’s logical structures allowing one to verify the major premises of the argument or accurately discredit the argument. It also allows one to present supporting evidence needed to avoid logical fallacies in their own arguments, like those mentioned above.
Unfortunately, many people are passively accepting the claims of these theorists at face-value without conducting any research themselves into those claims. The narrator of this video, Chris White, clearly is not one of them. I am going to continue to monitor his presented information but, so far as I can tell, he is definitely a credible source of reliable information. One I will look forward to hearing more from in the future.
Principles of White Noise
In an effort to continue with my most recent blogs, which have mostly been focused on light and sound, or more precisely sound and electromagnetic waves, I wanted to introduce you to the principles of white noise. White noise is something often referred to by paranormal investigators and researchers but few actually understand the concept. It is my hope with this article to bring some of the fundamentals of white noise to light and explain how these concepts could apply to investigations and research.
In this case, “white” is an adjective we use to describe a type of noise because of the way light works. I know you must be thinking, “Well, what does noise have to do with how light works?” That is certainly a fair question. White light is light that consists of all different frequencies (or colors) of light combined together. More to the point, it is white electromagnetic radiation. Most people know of the familiar prism separating light back into its component colors, a rainbow, as it passes through. White noise can be thought of this way: it is a combination of all different frequencies of sound – a sound or series of constant sounds containing every frequency, typically within the range of human hearing. In the same way that white light contains the whole spectrum of colors, white noise is created by combining the entire spectrum of frequencies usually heard by the human ear.
Most people think white noise, which is also called white sound, is simply “noise”, but this is not the case. It is actually a signal – a combination of sound frequencies.
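To make the “all frequencies at once” idea concrete, here is a minimal sketch of generating one second of white noise. It assumes NumPy is available and is not part of any IRG method:

```python
import numpy as np

sample_rate = 44_100  # samples per second (CD quality)
rng = np.random.default_rng(0)

# One second of Gaussian white noise: each sample is drawn independently,
# so the expected power is the same at every frequency.
white = rng.normal(0.0, 1.0, sample_rate)

# Its average power spectrum is flat across frequencies, which is what
# makes the analogy to white light apt.
spectrum = np.abs(np.fft.rfft(white)) ** 2
print(spectrum.mean())
```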
Most of us are aware that there are varying “colors” of noise. The different “colors” of noise have significantly different properties. Audio signals, for instance, will sound different to humans whereas images will have a visibly different texture. This is why specific applications will require a noise of a different “color”. The different colors of noise include, but are not limited to:
– White;
– Pink;
– Brown;
– Blue;
– Violet;
– Grey;
– Red;
– Green; and
– Black.
The specific differences involve a lot of math and sound engineering knowledge. It can get very complicated, so I will not go into those details right now.
White noise is used to “mask” other sounds. In technical terms, it is described as noise in which the amplitude is constant throughout the audible frequency range. The common misconception about white noise is that it is only associated with “static”. However, “white noise” is used as a general description for any type of constant unchanging background sound. Examples include, but are not limited to:
- Sounds of nature, such as rain, waves crashing on a shoreline, or crickets chirping;
- Sound of machinery, such as air conditioning units, a washing machine, or a fan; and
- Ambient soundscapes, such as the roar of an aircraft engine or a crackling fire.
This is the reason many people use some form of white noise when attempting to sleep.
It may sound counterproductive to add more noise when you are trying to sleep. However, it works because of the science behind white noise, which blends frequencies together, resulting in a masking effect. For instance, some people will use white noise to drown out annoying external sounds, such as a dog’s incessant barking outside or people talking. It blends these sounds into the overall background noise. When this happens, your brain pays less attention and can begin to relax. When you add the noise, you are implementing what is called Sound Masking: instead of actually being drowned out, the sounds become masked by the frequencies of the white noise.
The information above is the reason why I do not usually utilize white noise generators when my focus is primarily on sound-related evidence. Based on our experiments and research, doing so actually lessens your chances of capturing decent sound-related evidence – because the white noise tends to mask potential evidence. Unfortunately, we can only speculate. It is not yet clear whether this would apply to EVPs that are electromagnetic in nature rather than the result of a sound wave. If the EVP is not a sound wave, this concept may or may not apply, but we do not have anything absolutely conclusive at this point. This is one aspect of EVP research we have been exploring with our Gateway experiment, which is still ongoing. We suspect it would apply regardless because of the science of waves in general. Any sound wave can be represented visually as an image, depending on the equipment and software you are using to analyze it.
The problem is this: if the frequency of the potential EVP comes through at an amplitude that is lower than the white noise (the background noise), it will be drowned out by the energy of the frequency with the higher amplitude – the white noise itself. In other words, the white noise will inadvertently mask the EVP. We have all seen this phenomenon at one point or another. We think we may hear something on a recording, but it sounds too muffled and may be unintelligible. Or the reverse will happen: you will hear the sound wave with your own ears, but the equipment will appear not to have captured it. We believe this is one reason why. The background noise – the white noise of the area you are in – has a higher amplitude than the amplitude of the sound being recorded (assuming it is a sound wave), and the sound becomes “masked”, meaning it is still there but you will not hear it on the recording itself upon playback. Modern digital recorders were specifically designed to limit the amount of background noise recorded, resulting in a higher quality of recording. I believe this is one reason why you can capture better evidence on modern digital recorders than you can with older recorders.
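Here is a hedged illustration of that amplitude argument: a quiet tone mixed into much louder white noise sits far below the noise floor, which is the masking effect described above. The amplitudes are made up for the example, not measured values:

```python
import numpy as np

sample_rate = 44_100
t = np.arange(sample_rate) / sample_rate

# A low-amplitude 596 Hz "signal" (an arbitrary stand-in frequency).
tone = 0.05 * np.sin(2 * np.pi * 596.0 * t)

# Much louder white noise acting as the background.
rng = np.random.default_rng(0)
noise = 0.5 * rng.normal(size=t.size)

mixed = tone + noise  # to the ear, the tone is effectively buried

print(f"tone power:  {np.mean(tone ** 2):.5f}")   # ~0.00125
print(f"noise power: {np.mean(noise ** 2):.5f}")  # ~0.25
```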
Of course, this is just a basic hypothesis. We believe there are other factors that play perhaps an even more significant role, such as wave interference.
Theresa Byess, IRG Director and FAYPRA Partner
Theresa Byess is the Founder and Director of the Intranormalology Research Group (IRG), a division of the Fayetteville Paranormal Research Association (FAYPRA), a non-profit organization specializing in paranormal-related scientific research and development.
Born alongside the dusty bright lights of Las Vegas, Nevada, she has spent the better part of 17 years conducting her own personal research into faith and spirituality. What started off as a simple quest to better understand her Christian-based spirituality has since turned into a passion for teaching others all that she has learned over the years about faith, spirituality, and what has become known as the paranormal.
Many who get started in the field of the paranormal have, all too often, had remarkable claims of profound experiences leading them to want to better understand that which they had experienced. Theresa is one of the few who did not get started in paranormal research as a result of such a claim. To date, she has never conducted what many would refer to as an “official paranormal investigation”, nor does she claim to have had any experience that she could justify, beyond a shadow of a doubt, as paranormal in nature. Still, she continues to read, learn, and share all that she can with others in an effort to promote a better understanding of that which we currently do not understand. While she does not boast of having experienced anything (yet) of a truly “paranormal” nature, she firmly believes there is ample evidence to support such claims.
Theresa is a widowed mother of one son and is currently engaged to former United States Army soldier, J.R. Ivy, who is also the Co-Founder of IRG. They have shared a close friendship for more than 15 years, having only become a couple in 2011. They are currently living in the town of Fayetteville in North Carolina. Both are currently working toward Bachelor’s degrees in different fields.
Ms. Byess and the members of FAYPRA began an almost destined close partnership about 3 years ago, around the same time she moved from her hometown in Georgia to North Carolina, when she was seeking assistance with her group’s newest project, The Gateway EVP Project, a two-tiered study and experiment based on hypotheses surrounding EVPs. Since then, the relationship between IRG and FAYPRA has blossomed incredibly, feeling more like family than a “paranormal partnership”.
Today, she works in close partnership with FAYPRA in all facets of operations, including client investigations, spiritual advice, and educating the public regarding the various topics surrounding the field of the paranormal with her primary role acting in a research and development capacity.
Theresa considers herself a “student of Life”, constantly seeking knowledge, truth, and understanding of the world around her, motivated by the strong desire to learn and help others. Some consider her empathic, possessing an uncanny ability to sense the emotions, feelings, and thoughts of others.
In addition, Ms. Byess also considers herself a true skeptic, wherein she doubts but does not deny, believing emphatically that anything is possible but perhaps all too often exaggerated through a lack of understanding or misinterpretation. However, she remains nonjudgmental and unbiased when others seek advice, or in providing advice, listening with an attentive ear and an open heart and mind. To her, knowledge and understanding are the powers of the heart, mind, body, and soul – power that can be used to obtain a deeper sense of spiritual enlightenment, freedom, and peace.
Having never claimed to experience anything profound of a paranormal nature, but also blessed with what she commonly refers to as “divinely-granted knowledge and understanding”, she wholeheartedly believes she is in the perfect position to act in an unbiased capacity regarding the paranormal and its research.
Her goal is not to “prove” or “disprove” the existence of the paranormal as a whole. Her goal, and indeed the goal of IRG, is to scientifically explore the plausibility of the various aspects, hypotheses, and theories of the paranormal.
PERPS: The Micro and Macro View
11/12/2014
By Theresa Byess
To understand PERPS, a little background is in order. This is the first note in a series that will attempt to explain PERPS and IRG processes in the hopes we will develop a better understanding of the phenomena we are attempting to study.
IRG has spent many years researching various disciplines and theories, some pertaining to the paranormal and others not. We had concerns, much like we did when Gateway was born, that fundamentally inaccurate assumptions were being based on simple observations – like the ancient beliefs that the earth was flat and that the sun, moon, planets, and stars rotated around us. There were many theories floating around out there with no real empirical data to back them up either way. These theories are connected to everything from weather and climate conditions to geology, geography, and more. IRG wanted to explore which of these theories, on both the proponents’ and opponents’ sides of the debate, were at least plausible, if at all, based solely on empirical scientific research data.
Case in point: there is a theory that population density will have an effect on the number of reports of paranormal claims. I, like many, assumed that the higher the population of an area, the higher the number of reports of paranormal occurrences would be. It seemed like a reasonable assumption on the surface, but we wanted to find out if it was accurate. Exactly how might the population density of an area truly affect the number of claims of paranormal activity?
With PERPS, one of the hypotheses we are attempting to explore is whether or not cities with higher populations will generate a higher number of claims of spiritually-related paranormal activity. On the skeptic’s side of the debate, the belief is that such reports stem from some form of mass delusion or hysteria related to an overactive imagination or some other psychological, mental, emotional, and/or physical illness, disease, and/or disability; hallucinations; misidentification; pareidolia; blatant falsification or exaggeration; an ignorance of natural processes; or some other “naturalistic explanation”. There are opinions that such experiences are related primarily to factors such as culture bias, spiritual beliefs, education level, disease/illness, population density, poverty level, and a myriad of other factors.
On the proponent’s side, higher numbers of reports are merely a justification of the truth and validity of such claims: if more people are witness to the same events, chances are there is a higher likelihood that it is based on truth. There is obviously something to the reports. If 20 people, for example, pick the same man out of a line-up, chances are the jury is going to believe those 20 people. On the other hand, the defense may present statistics on the seeming unreliability of eyewitness testimony.
Both of these assumptions are reasonably viable arguments. However, as a whole, they are both lacking when it comes to being open to the possibility that one or both could be wrong entirely or to varying degrees. In general, many are utterly unwilling to waver in their beliefs or personal opinions despite potential evidence to the contrary. I refer to these types of people as delusional deniers and irrational believers. Marcello Truzzi coined the term “pseudoskeptics” to describe the group of delusional deniers, who deny emphatically in the face of evidence to the contrary. There are pseudo-skeptics, but there are also pseudo-believers. Unfortunately, this field has been inundated with both types and, in my own personal opinion, this severely hinders progress in the field.
With that being said, IRG’s research is based on the happy medium that exists between these two groups of the far right and the far left – an area we refer to as the Neutral Zone (NZ). Our purpose is to remain as unbiased and objective as possible. Our hope is to explore the questions many have in this field to see if there are truly any plausible explanations, regardless of which side of the fence they fall on. Obviously, there is “something” going on that demands serious scientific inquiry and study – an assumption based on the observation of strange happenings over hundreds of thousands of years. Like so many things, we hypothesize, these occurrences are going to be based on a wide range of factors, and there is not going to be a “one size fits all” explanation but a series of better and better approximations. There are only going to be general factors from which to base observations. There are going to be key indicators that certain types of spiritually-related phenomena, for example, are related more to population density, while there are going to be other key indicators which “defy rational explanation” beyond the degree to which we can currently explain them. There will be profiles that can be created and applied to different situations.
This level of thinking coincides with IRG’s belief that these types of phenomena are best studied in terms of micro and macro. We have previously published information on this topic in another note, which can be found here.
To put it simply, “macro” means to study something on a larger scale, which cannot typically be observed directly, while “micro” is on a much smaller scale and can be observed and identified. In other words, micro studies individual topics while macro studies whole or general topics. Like so many disciplines studied in this manner – micro- and macroeconomics, micro- and macroevolution, micro- and macrophotography – they must be studied separately.
Why is this essential to studying this type of phenomena? Simply knowing (or at least assuming we know) that interests and other factors are central to any decision-making process is not sufficient for predicting how people will react to perceived spiritually-related phenomena. A framework must be developed that will allow us to analyze the phenomena and solutions to each paranormal question. This framework will give us the power to reach informed conclusions and decisions about what is happening with these types of experiences.
To better understand this concept, we must first define that which we are attempting to study. We call this Spiritualology, a new term coined by IRG – the study of perceived anomalous phenomena related to the idea of spirits, ghosts, and malevolent/benign beings through the application of various scientific disciplines, such as the social sciences. The goal is not yet to study the phenomena itself. It’s a fact-finding mission designed to study people first and the phenomena later, based on the results of those studies.
Spiritualnomics, according to IRG, is one sub-category of Spiritualology and seeks explanations of events and occurrences and, as such, is a part of social science. All currently accepted social sciences are meant to analyze human behavior and decision-making, as opposed to the physical sciences, which generally analyze topics such as atoms, subatomic particles, and other nonhuman phenomena.
Spiritualnomics is further divided into two types of analysis: microspiritualnomics (MISN) and macrospiritualnomics (MASN).
MISN is the part of the Spiritualnomics analysis that studies the individual aspects of Spiritualology. It is comparable to looking through a microscope to focus on the smaller parts of the new scientific discipline. It is concerned, for example, with the effects of the population density of an area on the number of cases reported relative to other factors, such as cultural background or spiritual beliefs.
MASN, dealing with aggregates, or the total amounts or quantities, is the part of Spiritualnomics analysis that studies the phenomena as a whole, dealing with the nationwide phenomena. Issues such as how the weather and/or climate in a particular region, the rate of nationwide unemployment, and the general culture affects the number of cases reported would be studied under this discipline.
Microspiritualnomics is the basis for macrospiritualnomics because, even though aggregates are being studied and examined in macrospiritualnomic analysis, those aggregates are the result of information produced by individuals on the micro scale.
Following the logic of micro- and macroeconomics, we can apply certain principles in this case as well. One such assumption is called the Rationality Assumption, an assumption of economics that states “we assume that individuals do not intentionally make decisions that would leave them worse off”. The distinction here is that economics is not meant to explore the “why” factor of people’s decision-making. It is meant to explore the “what” factor – what do people actually do? The “why” factor is a matter for psychology. Similarly, MISN and MASN do not attempt to answer the “why” factor. Instead, they attempt to analyze the “what” and “how” factors. What effect does the population density of an area or a nation (nationally or internationally) have, if any, on the number of cases reported? How does the weather/climate of an area or region appear to affect the number of cases reported? How do the demographics, including but not limited to age, race, education level, crime rates, and unemployment of an area or region, affect the number of cases reported?
Spiritualnomics is a social science, albeit unofficially, that employs the same types of methods used in other sciences, such as physics, chemistry, and biology. It uses models or theories. Models are simplified representations of the world used to help us understand, explain, and predict phenomena. Like so many social sciences, Spiritualnomics makes little use of laboratory experiments in which changes in variables are studied under controlled conditions. Instead, models and theories are tested by examining what has already happened. It is important to note also that no model of any science is complete in the sense that it details every existing interrelationship. Models are by definition abstractions from reality, making it conceptually impossible to generate a perfectly complete, realistic model.
Every model or theory must be based on a set of assumptions, defined as the set of circumstances in which the model is most likely applicable. If the goal is to explain observed behavior, the simplicity or complexity of the model being used would be irrelevant. However, if a simple model can explain observed behavior in repeated patterns or settings as well as a complex one can, the simple model would more than likely have more value and be easier to use.
Like many other sciences, Spiritualnomics employs the ceteris paribus assumption, which means other things being equal or constant. Consider an example from the world of economics. We know one of the most important factors in how much of a product a particular family will purchase is the price of that product relative to other products. We understand other factors influence this decision-making process, such as income and taste. Regardless, those other factors are held constant when the relationship between changes in prices and changes in the quantity of the product purchased is examined. While this concept of “other things being equal” is still being explored and refined as it pertains to MISN and MASN, it will ultimately prove to be key in studying Spiritualnomics. Like economics, it would be impossible to isolate the effects of changes in one variable on another if we always had to worry about the vast number of other variables that might also enter into the analysis.
Another economic precept that applies to Spiritualnomics is that of Opportunity Cost (OC). The concept is based on the idea that when you do something, you lose something else: you lose the opportunity to engage in the next highest-valued alternative, and the cost of the choice is what is lost. Consider this example. In March of 2013, IRG published a note titled Power and Energy. In this note, we hypothesized that investigators and researchers may inadvertently be harming their chances of obtaining more substantial evidence with the introduction of multiple pieces of equipment and other experimental apparatuses or energy sources. The reason is that, in physics, each addition of energy creates a change in the state of matter (or, more precisely, in electromagnetic radiation). As a result, multiple sources of energy in a location may cause erratic changes in the Spiritual Energy Form (SEF), a term coined by IRG to describe the various types of observed spiritually-related phenomena, such as apparitions and shadows. The result is what we call Diminished Potential Evidence (DPE). This diminished potential evidence is the opportunity cost of using multiple scientific instruments or other energy sources, ceteris paribus. This, of course, is merely a hypothesis at this point and one we are attempting to test in the near future, when we hope to understand how one perceived spiritual form changes to another.
We are currently exploring how other economics principles apply, if at all, and predict many of the general laws of economics will prove useful with slightly different applications, such as the Production Possibility Curve and the laws of supply and demand. The implications of this are staggering. Why? With this information, we will finally be able to apply mathematical equations and graphs to the research. Graphs are simply visual representations of relationships between two variables – in this case, the relationship between people and perceived spiritually-related phenomena. It could potentially provide the empirical data needed in support of or against certain hypotheses. This will allow us to move one step further toward understanding these phenomena.