1.3: Scientific Theories - Biology

Theory vs. theory. Is a scientific theory different from the everyday use of the word theory?

A scientific theory is accepted as a scientific truth, supported by evidence collected by many scientists. The theory of evolution by natural selection is a classic scientific theory.

Scientific Theories

With repeated testing, some hypotheses may eventually become scientific theories. Keep in mind, a hypothesis is a possible answer to a scientific question. A scientific theory is a broad explanation for events that is widely accepted as true. To become a theory, a hypothesis must be tested over and over again, and it must be supported by a great deal of evidence.

People commonly use the word theory to describe a guess about how or why something happens. For example, you might say, “I think a woodchuck dug this hole in the ground, but it’s just a theory.” Using the word theory in this way is different from the way it is used in science. A scientific theory is more like a fact than a guess because it is so well-supported. There are several well-known theories in biology, including the theory of evolution, cell theory, and germ theory.

As you view Know the Difference (Between Hypothesis and Theory), focus on these concepts:

  1. the controversy surrounding the words "hypothesis" and "theory",
  2. the scientific use of the words "hypothesis" and "theory",
  3. the criteria for a "hypothesis",
  4. the National Academy of Sciences definition of "theory",
  5. the meaning of the statement, "theories are the bedrock of our understanding of nature".

The Theory of Evolution

The theory of evolution by natural selection is a scientific theory. Evolution is a change in the characteristics of living things over time. Evolution occurs by a process called natural selection. In natural selection, some living things produce more offspring than others, so they pass more genes to the next generation than others do. Over many generations, this can lead to major changes in the characteristics of living things. The theory of evolution by natural selection explains how living things are changing today and how modern living things have descended from ancient life forms that no longer exist on Earth. No evidence has been identified that proves this theory is incorrect. More on the theory of evolution will be presented in additional concepts.

The Cell Theory

The cell theory is another important scientific theory of biology. According to the cell theory, the cell is the smallest unit of structure and function of all living organisms, all living organisms are made up of at least one cell, and living cells always come from other living cells. Once again, no evidence has been identified that proves this theory is incorrect. More on the cell theory will be presented in additional concepts.

The Germ Theory

The germ theory of disease, also called the pathogenic theory of medicine, is a scientific theory that proposes that microorganisms are the cause of many diseases. Like the other scientific theories, lots of evidence has been identified that supports this theory, and no evidence has been identified that proves the theory is incorrect.


  • With repeated testing, some hypotheses may eventually become scientific theories. A scientific theory is a broad explanation for events that is widely accepted as true.
  • Evolution is a change in species over time. Evolution occurs by natural selection.
  • The cell theory states that all living things are made up of cells, and living cells always come from other living cells.
  • The germ theory proposes that microorganisms are the cause of many diseases.

Explore More

Use these resources to answer the questions that follow.

Explore More I

  • Darwinian Evolution - Science and Theory at Non-Majors Biology:
  1. How is the word "theory" used in common language?
  2. How is the word "theory" used in science?
  3. Provide a detailed definition of a "scientific theory".

Explore More II

  • Concepts and Methods in Biology - Theories and Laws at Non-Majors Biology:
  1. What is a scientific law?
  2. What is a scientific theory?
  3. Give two examples of scientific theories.
  4. Can a scientific theory become a law? Why or why not?


  1. Contrast how the term theory is used in science and in everyday language.
  2. Explain how a hypothesis could become a theory.
  3. Describe the evidence that supports the cell theory.

1.3: Slip Line Field Theory

  • Contributed by Dissemination of IT for the Promotion of Materials Science (DoITPoMS)
  • Department of Materials Science and Metallurgy at University of Cambridge

This approach models plastic deformation in plane strain only, for a solid that can be represented as a rigid-plastic body. Elasticity is not included, and the loading must be quasi-static. In terms of applications, the approach has now been largely superseded by finite element modelling, which is not constrained in the same way and for which many commercial packages are now available for complex loading (including static and dynamic forces plus temperature variations). Nonetheless, slip line field theory can provide analytical solutions to a number of metal forming processes. It utilizes plots showing the directions of maximum shear stress in a rigid-plastic body deforming plastically in plane strain. These plots show anticipated patterns of plastic deformation from which the resulting stress and strain fields can be estimated.

The earlier analysis of plane strain plasticity in a simple case of uniaxial compression established the basis of slip line field theory, which enables the directions of plastic flow to be mapped out in plane strain plasticity problems.

There will always be two perpendicular directions of maximum shear stress in a plane. These generate two orthogonal families of slip lines, called α-lines and β-lines. (Labelling convention for α- and β-lines.)

Experimentally, these lines can be seen in a realistic plastic deformation situation, e.g.

  • Polyvinyl chloride (PVC) viewed between crossed polars
  • Nitrogen-containing steels can be etched using Fry's reagent to reveal regions of plastic flow in samples such as notched bars and thick-walled cylinders.
  • Under dull red heat in forging, we see a distinct red cross caused by dissipation of mechanical energy on slip planes.

To develop slip line field theory to more general plane strain conditions, we need to recognize that the stress can vary from point to point.

Therefore, p can vary, but k is a material constant. As a result of this, directions of maximum shear stress and the directions of principal stresses can vary along a slip line.
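In plane strain for a rigid-plastic solid with shear yield stress k, the in-plane stress state at any point of the deforming region can be written (in one common convention, with φ the anticlockwise angle between the x-axis and the α-line; sign conventions vary between texts) as:

```latex
\sigma_{xx} = -p - k\sin 2\phi, \qquad
\sigma_{yy} = -p + k\sin 2\phi, \qquad
\tau_{xy} = k\cos 2\phi
```

so that the in-plane principal stresses are −p ± k: the maximum shear stress is always k, while the hydrostatic pressure p and the angle φ vary from point to point.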

Hencky relations

These equations arise from consideration of the equilibrium equations in plane strain.
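Written with φ as the anticlockwise rotation of the α-line from a fixed axis, p the hydrostatic pressure, and k the shear yield stress, the Hencky relations take the standard form:

```latex
\begin{aligned}
p + 2k\phi &= \text{constant along an } \alpha\text{-line},\\
p - 2k\phi &= \text{constant along a } \beta\text{-line}.
\end{aligned}
```

Knowing p at one point on a slip line therefore fixes p everywhere along that line, once the rotation of the line has been mapped.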

We can apply these relations to the classic problem of indentation of a material by a flat punch. This is important in hardness testing, in foundations of buildings and in forging.
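As a sketch of how the Hencky relations produce the classic frictionless flat-punch result (the Prandtl field): a slip line can be followed from the free surface beside the punch, where the surface is stress-free and the material is at yield, so p = k, through a centred fan that rotates the line by Δφ = π/2, to the region directly beneath the punch. With the rotation sign fixed by the α/β labelling convention, the Hencky relation then gives:

```latex
p_{\text{punch}} = k + 2k\cdot\frac{\pi}{2} = k(1+\pi),
\qquad
P = p_{\text{punch}} + k = 2k\left(1+\frac{\pi}{2}\right) \approx 5.14\,k
```

where P is the mean pressure the punch must exert: the standard slip-line-field indentation pressure.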

In more general cases, slip lines do not intersect external boundaries at 45° because of friction. In the extreme case, sticking friction occurs (a perfectly rough surface) and slip lines are at 90° to the surface.

Slip-line field for compression between a pair of rough parallel platens [1]

The slip line patterns above are very useful for analyzing plane strain deformation in a rigid-plastic isotropic solid. Arriving at such a slip-line pattern is rather complex: patterns are either derived from model experiments in which the slip-line field is apparent, or they are postulated from experience of problems with similar geometry.

For a slip-line field to be a valid solution (though not necessarily a unique one), the stress distribution throughout the whole body, not just in the plastic region, must not violate stress equilibrium, nor must it violate the yield criterion outside the slip-line field.

The resultant velocity field must also be evaluated to ensure that strain compatibility is satisfied and matter is conserved.

These are stringent conditions and mean that obtaining a slip-line field solution is often not simple.

Instead, it is often useful to take a simpler approach to analysing deformation processing operations, in which one or another of the stringent conditions is relaxed to give useful approximate solutions for part of the analysis.


This paper describes the work on opportunity and crime since it was put on the agenda some 35 years ago in the Home Office publication, Crime as Opportunity (Mayhew et al. 1976). As described by Hough and Mayhew (2011), this work began in the early 1970s, when the two of them and I were employed in the Home Office Research Unit. In this paper, I will focus on a question that preoccupied us then and later: Is opportunity a cause of crime? The question was unavoidable because if opportunity is a cause, then reducing it could be expected to reduce crime without displacing it; if opportunity merely determined when and where crime occurred, but did not cause it, then the expected result of reducing opportunity would simply be to displace it.

Despite its importance, no attempt was made to resolve this question in Crime as Opportunity, but twenty-two years later Felson and I felt able to claim that opportunity is a cause of crime (see Opportunity Makes the Thief, Felson and Clarke, 1998). This change needs some explanation, and this paper addresses four pertinent questions:

Why did Crime as Opportunity, published in 1976, avoid making the claim that opportunity is a cause of crime?

What had changed by the time that Opportunity Makes the Thief was published in 1998?

What has happened since claiming that opportunity causes crime?

What has been achieved by the work on opportunity and crime – i.e. “So what”?

Why did Crime as Opportunity (1976) not claim that opportunity is a cause of crime?

The short answer to this question is that the reason was a mixture of conceptual confusion and timidity. Regarding the latter, to have claimed that opportunity is a cause of crime would have gone quite contrary to the criminology of the day, which was heavily dispositional (it still is). This dispositional bias can be traced back to Sutherland, a sociologist, often described as the father of criminology. In his introduction to Principles of Criminology (3rd ed., 1947), he expressed the opinion:

“The problem in criminology is to explain the criminality of behavior, not the behavior, as such” (page 4)… “The situation operates in many ways, of which perhaps the least important is the provision of an opportunity for a criminal act” (page 5).

This seems to have been broadly accepted by criminologists, most of whom, like Sutherland, were also sociologists, but it flouted a fundamental principle of psychology: behavior is the product of the interaction between environment and organism. In criminological terms, this can be expressed as “crime is the result of an interaction between a motivated offender and a criminal opportunity”. The importance of taking this principle seriously had been brought home to me by the results of a study of absconding from training schools (Clarke and Martin 1971). This found that absconding was better explained by the environments and regimes of the training schools, which facilitated and provoked absconding, than by absconders’ personalities and backgrounds. The study therefore showed not just that situational opportunities and provocations to abscond were important explanatory factors, but that they appeared to be more important than dispositional factors.

Before proceeding, it should be noted that Sutherland in fact recanted his views about the role in crime played by opportunity and situation in a little-known unpublished paper, “The Swansong of Differential Association” (his theory of crime causation), which was later issued in a collection of his writings (Cohen et al. 1956). The paper included the following passage:

“One factor in criminal behavior that is at least partially extraneous to differential association is opportunity. Criminal behavior is partially a function of opportunities to commit specific classes of crime, such as embezzlement, bank burglary or illicit heterosexual intercourse. Opportunities to commit crimes of these classes are partially a function of physical factors and of cultures which are neutral as to crime. Consequently, criminal behavior is not caused entirely by association with criminal and anti-criminal patterns, and differential association is not a sufficient cause of criminal behavior.” (Sutherland, 1956:31, quoted by Merton, 1995: 38).

Unfortunately, we were not aware of this retraction when we wrote Crime as Opportunity. In addition, we found it difficult to muster unambiguous evidence supporting our views about the role of opportunity in crime. The absconding work mentioned above had gone unnoticed by criminology and, because it dealt with institutional misbehavior, would hardly be convincing as the basis for generalizing about all crime. The same could be said of Mischel’s (1968) psychological studies and of the Studies in Deceit published by Hartshorne and May (1928), work that we cited. The former studies established that behavior could not be reliably predicted from personality test scores because it was heavily determined by situational factors, while the latter work showed that whether children cheated on tests or behaved dishonestly depended mostly on situational variables such as the risks of being found out. In both cases, there was a considerable stretch between our thesis on opportunity and Mischel’s findings (too abstract) and those of Hartshorne and May (relating to minor juvenile transgressions in artificial settings).

Finally, the two empirical studies included in Crime as Opportunity provided only mixed support for the causal role of opportunity. The first study showed that double-deck buses suffered more vandalism on the front upper deck and other parts of the bus that could not be easily supervised by the bus crew. Strictly speaking, this showed only that lack of supervision determined the vandal’s choice of where to target. The second study examined the effect on car theft of the compulsory fitting of steering column locks in England and Germany. In England, only new cars were fitted with the locks, with the result, consistent with dispositional theory, that thefts were displaced to older cars without the locks. In Germany, all cars, new and old, were fitted with the locks. This ruled out displacement, with the result, consistent with situational theory, of an immediate and sustained decline in theft of cars. To us these results were not in conflict, but our critics saw them as providing support for either side of the argument about the role of opportunity in crime.

Apart from these strands of evidence, two books published in the early 1970s gave us some encouragement – Jeffery’s (1971) Crime Prevention through Environmental Design, which argued that criminologists had neglected the biological and physical determinants of crime (including opportunity), and Oscar Newman’s (1972) Defensible Space, which claimed that the high crime levels of public housing projects in the United States were due as much to the criminogenic designs of the projects as to the nature of the residents. Unfortunately, neither book proved helpful to the argument that opportunity was a neglected cause of crime, because both were given short shrift by criminological reviewers – Jeffery’s because of his views about the biological causation of crime, and Newman’s because of its anthropomorphic conception of human “territoriality” and shortcomings in its statistical analyses (Newman was an architect, not a social researcher).

Thus, the lack of clear supporting evidence is one reason why Crime as Opportunity did not make the case that opportunity is a cause of crime, but another was the widespread belief at the time that social science could not establish causal relationships. This led us – me in particular – into a confusing exploration of the notion of cause in social science. I was impressed by Barbara Wootton’s (1950) argument about the infinitely regressive nature of cause:

“The search for causal connections between associated phenomena simply resolves itself into a long process of ‘explaining’ one association in terms of another. If a person becomes ill with what are known as diabetic symptoms, we measure the sugar-content of his blood. If this is higher than that found in people not exhibiting such symptoms, we say that the high sugar content is the cause of the illness… this in turn is said to be ‘explained’ by a failure of the pancreas to function normally… And so on with one law of association following another.” (page 18).

This argument, made by a greatly respected social scientist, led me to think that it would be impossible to claim that opportunity was a cause of crime without becoming entangled in endless philosophical arguments, and I turned instead to the notion of necessary and sufficient conditions. Once again, I spent sleepless hours trying to decide whether opportunity was a necessary or a sufficient cause of crime, becoming confused about the meanings of each, and then deciding that sorting one from the other would still not allow us to claim that opportunity was a cause of crime.

A further deterrent was that to claim that opportunity caused crime would require us to define opportunity. Here again the difficulties seemed overwhelming. Any definition would have to take account of the fact that opportunities are highly crime-specific – those that “cause” bank robbery are quite different from those that cause rape. In addition, for any crime, opportunities occur at several levels of aggregation. To take residential burglary as an example, a macro-level, societal cause might be that many homes are left unguarded in the day because most people now work away from home (cf. Cohen and Felson 1979). A meso-level, neighborhood cause could be that many homes in poor public housing estates once used coin-fed fuel meters, which offered tempting targets for burglars (as found in Kirkholt; Pease 1991). A micro-level cause, determining the choices made by a burglar, could be a poorly secured door. Finally, at any particular level of explanation, for any specific crime, the opportunities are vast. For example, micro-level opportunities include not just an unlocked door, but an open window, an isolated location, bushes that provide cover, signs of wealth, etc.

A final reason for avoiding the controversial claim that opportunity was a cause of crime was that we were not university scientists, with the authority implied by that status, but government researchers. Worse, we were what came to be called “administrative criminologists”, uncritically serving our civil service masters, though, as it proved, many of these masters were as skeptical about situational prevention as most academics of the time proved to be.

What had changed by the time that Opportunity Makes the Thief was published in 1998?

Twenty-two years after the publication of Crime as Opportunity, Felson and I finally asserted in Opportunity Makes the Thief (Felson and Clarke, 1998) that opportunity is a cause of crime. What took so long? The explanation begins with the reception given to Crime as Opportunity, which was generally dismissed (more in conversation than in print) as irrelevant, simplistic and atheoretical. The first review, which concludes with the following sentences, gives a flavor of this reaction:

“Now that the criminological kitchen is becoming so hot, it is as if the (Home Office) Research Unit is looking for a nice, quiet, simple, and nonpolitical corner. It is a touching, if unworldly idea – like playing with one's toes. But it won’t catch on.” (Beeson, 1976: 20).

The publication was also criticized for denying the influence of the “root causes” of crime – maternal deprivation, subculture, relative deprivation and anomie – that criminologists had so patiently documented over the years. The fact that our critics felt little inhibition in claiming that their theories were causal in nature meant that our caution had been wasted. This strengthened our resolve to be more assertive about the causal role of opportunity, but this only became possible much later as a consequence of the following developments:

Quite soon after the publication of Crime as Opportunity, we found we were not alone in our views: Cohen and Felson (1979) and Brantingham and Brantingham (1981), working respectively in the United States and Canada, had persuasively argued that explaining crime involved not merely explaining criminal dispositions but also explaining the role of immediate circumstances and situations.

In order to underpin situational prevention, Cornish and I developed the rational choice perspective (Clarke and Cornish 1985; Cornish and Clarke 1986). While this was also heavily criticized (cf. Clarke, in press), it should have been clear that the theory underlying Crime as Opportunity was far from simple-minded.

Some helpful theoretical work on causation began to appear in the years after the publication of Crime as Opportunity. Felson cut through our prevarications about free will and determinism in the original formulation of the rational choice perspective (Clarke and Cornish 1985) by declaring that: “People make choices, but they cannot choose the choices available to them” (Felson 1986: 119). Ekblom (1994) made a useful distinction between near and far causes of crime, with the rider that, as a near cause, opportunity had a more powerful and immediate effect on crime than criminal dispositions formed many years before. Distinguishing between near and far causes provided a solution to Barbara Wootton’s problem of infinitely regressive causes. Tilley reinforced Ekblom’s position by insisting that the mechanisms through which a cause was supposed to exert its effect should always be clearly specified (cf. Tilley and Laycock 2002). This again favored opportunities over dispositions because the causal mechanisms of situations and opportunities were so much shorter and easier to chart. Following a different line of argument, but one that endorsed the practical value of focusing on reducing crime opportunities, James Q. Wilson (1975) stated that if criminologists persisted in framing theories in terms of “causes” that could not be changed (e.g. maternal deprivation or relative deprivation), they would be consigned to policy irrelevance.

A number of studies were published during this period providing strong evidence about the causal role of opportunity, several of which I undertook with Pat Mayhew. First, we showed in Mayhew et al. (1989) that the introduction of helmet laws in various countries had a dramatic effect in reducing theft of motorcycles, apparently because motorcycle thefts were frequently unplanned, which meant that opportunistic thieves would be immediately noticed when riding past without a helmet. Table 1 shows data from Germany, where the laws were progressively enforced, having first been brought into effect in 1980. The table shows that motorcycle thefts were greatly reduced, with little if any consistent evidence of displacement to car or bicycle thefts.

Second, we showed (Clarke and Mayhew 1988) that the 35% decline in numbers of suicides occurring in England and Wales between 1958 and 1977 was caused by a progressive reduction in the poisonous carbon monoxide content of domestic gas (see Table 2). These reductions resulted from cost-saving measures and were brought about, first, by a change in manufacture from coal-based to oil-based gas and then by the substitution of natural gas from the North Sea, which contained no carbon monoxide. Suicides by domestic gas, which accounted for 50% of the deaths at the beginning of the period, were virtually eliminated by the end of the period, and their decline precisely tracked, year-by-year, the decline in the carbon monoxide content of domestic gas – strong evidence of a causal relationship. There was little displacement to other methods of suicide when domestic gas was detoxified, presumably because these methods were more difficult, painful or distasteful.

Third, in the same publication we sought to explain why rates of homicide were eight times greater in the United States than in England and Wales during the mid-1980s, when rates for most other crimes differed little between the two countries. This difference was the result of a much higher rate of gun homicides in the United States, particularly handgun homicides, which in turn was due to much higher levels of gun ownership – a situational variable – in that country than in England and Wales.

Fourth, using data from the British Crime Survey, we showed (Clarke and Mayhew, 1998) that risks of car theft in public car parks were eleven times greater than when cars were parked for the same length of time in the owner’s driveway and 225 times greater than when parked in the owner’s garage. These large differences in risk seemed to us to be strong evidence that the low risk of stealing cars from parking lots caused theft and did not merely facilitate it.

The motorcycle helmet study and the suicide study were just two of the many studies that appeared during the period showing that the risks of displacement had been exaggerated. This was confirmed in reviews of these studies by Eck (1993) and by Hesseling (1994), who reported that no displacement was found in many studies and that, when found, the number of crimes displaced was many fewer than the number prevented. Hesseling also reported the encouraging finding of his review that “diffusion of benefits” (Clarke and Weisburd 1994), where crime reductions are found beyond the intended reach of opportunity-reduction measures, was a common result of situational prevention.

Apart from the specific pieces of research mentioned above, there was remarkable growth in scholarly activity concerned with opportunity-reduction. Under Gloria Laycock’s leadership, the Crime Prevention Unit in the Home Office, and subsequently the Police Research Group, published some 150 relevant reports. In the United States, Crime Prevention Studies, a book series devoted to situational crime prevention, had issued 8 volumes by 1998. The first edition of Felson’s (1994) Crime and Everyday Life, which has sold many thousands of copies and has done more than any other publication to disseminate findings about the role of opportunity in crime, was published in the United States. Finally, the establishment in 1992 of the annual meetings of the Environmental Criminology and Crime Analysis (ECCA) group has served to build a world-wide network of scholars interested in the situational determinants of crime.

The developments listed above had transformed the evidential base for asserting the powerful role of opportunity in crime. Consequently, Felson and I readily agreed to Gloria Laycock’s request, made in 1998, to write a paper arguing that opportunity was a cause of crime. If published in the Police Research Series as intended, she believed the paper would materially help in persuading police and local authorities to make more use of situational prevention. Without resolving the philosophical issues around the concept of cause, we simply decided to assert in Opportunity Makes the Thief that opportunity is an important cause of crime.

After Opportunity Makes the Thief

Much has happened since the publication of Opportunity Makes the Thief to reinforce these claims about the role of opportunity in crime.

Valuable studies have continued to accumulate on displacement and diffusion of benefits, including one showing that the installation of 3178 lockable gates to restrict access to alleys behind row houses in Liverpool saved £1.86 in costs of residential burglary for every £1 spent on the gates during the first year after installation. There was little apparent displacement but clear evidence of a diffusion of benefits to nearby streets without the gates (Bowers et al. 2004), and these benefits were sustained in later years (Armitage and Smithson 2007). Another strong study, undertaken in the United States, showed that system modifications made by cell phone companies eliminated a problem of cell phone cloning that at its height in 1996 had cost as much as $800 million in one year. There was no evidence of displacement to the second most common form of cell phone fraud, acquiring cell phone service through the presentation of false ID (Clarke et al. 2001). These studies were included in an important review, published in Criminology, the discipline’s leading journal, of 102 situational prevention studies. This review found that: (1) no displacement was found in 68 of the studies; (2) when found, displacement was never complete; and (3) diffusion of benefits occurred in 39 of the studies (Guerette and Bowers 2009).

The period has also seen useful additions to the theory underlying situational crime prevention, perhaps the most important of which was Wortley’s (1997, 2001) argument that situations are not just passive providers of opportunities for crime, but can also precipitate crime. His careful enumeration of the many ways this can occur led to an expansion of the frequently cited classification of situational prevention techniques, which, thanks to his work, now number 25 (Cornish and Clarke 2003). Felson’s concept of guardianship has stimulated considerable research effort, notably in terms of Eck’s (2003) “double crime triangle”, the inspiration for the Center for Problem-oriented Policing’s logo. Cornish’s (1994) introduction of the concept of crime scripts, which assists in laying out the various stages of a crime (from planning through commission and escape to the later stages of covering tracks and disposing of stolen goods), has stimulated a considerable volume of work, particularly on complex crimes such as internet child pornography (Wortley and Smallbone 2006), suicide bombings (Clarke and Newman 2006), and organized crimes of various kinds (Tremblay et al. 2001; Bullock et al. 2010; Chiu et al. 2011). This work has shown that situational crime prevention is applicable not just to “opportunistic” street crimes, but potentially to every form of crime, however complex, and however determined the offenders.

This empirical and theoretical progress has not gone unremarked outside the small circle of environmental criminologists directly involved in this work. Garland (2001) argued in a widely read book on criminology and public policy that opportunity theory (or what he called the “criminologies of everyday life”) had been more influential in recent decades than any other criminological approach, though this was disputed by Young (2003), the originator of the term “administrative criminology”, in an unusually long review essay. Judged by various government White Papers and other reports, the latest of which is The Government’s Approach to Crime Prevention (Home Affairs Committee, 2010), situational crime prevention has directly contributed to crime policy thinking in the United Kingdom, if not also in some other European countries. To date, however, situational prevention has made relatively little impression on American criminology, perhaps because American criminology is focused even more strongly on dispositional theory than the criminology of other countries. But even this might change: Cullen, a former president of the American Society of Criminology and a redoubtable dispositional theorist, made the following remarks on recently being nominated for the Society’s premier award: “If you have not heard of ECCA, it means that you likely know nothing about crime – but you should.” (Cullen 2011: 314).

ECCA’s growing influence is one example of the growth in the institutional strength of environmental criminology. ECCA members have promoted teaching of environmental criminology in their universities, including Huddersfield and Loughborough in the U.K., Twente University and the NSCR in Holland, Griffith University in Queensland, Simon Fraser University in Canada, and, in the United States, Rutgers, the University of Cincinnati, Temple University, and, finally, Texas State at San Marcos, where Felson has recently joined Rossmo and other environmental criminologists.

Another important example of growing institutional strength is the rise to prominence of the Popcenter, the website of the Center for Problem-oriented Policing, which is funded by the U.S. Department of Justice. In the ten years of its existence, Popcenter has become an indispensable resource for those involved in problem-oriented policing and situational crime prevention. More than 80,000 files per month are downloaded from the website, equally by students and police officers.

The jewel in the crown of the emerging empire in crime science, for me, however, is the Jill Dando Institute of Crime Science, founded at University College London through the combined efforts of Nick Ross (the former presenter of Crimewatch, the BBC television program) and Ken Pease. From its modest beginnings some ten years ago, when it consisted of little more than its director, Gloria Laycock, a trust fund, a secretary, and a box of books, it has grown into a formidable Institute within UCL’s faculty of engineering, with more than thirty staff members and Ph.D. students.

What has been achieved by the work on opportunity and crime – that is to say, “So what”?

As a result of the developments listed above, it is possible to make much bolder claims for the role of opportunity in crime, as follows:

Criminally-disposed people will commit more crimes if they encounter more criminal opportunities.

Regularly encountering such opportunities can lead these people to seek even more opportunities.

At the point of deciding to commit a crime, opportunity plays a more important role than dispositions.

The existence of easy opportunities for crime enables some people to lead a life of crime.

People without pre-existing dispositions can be drawn into criminal behaviour by a proliferation of criminal opportunities, and generally law-abiding people can be drawn into committing specific forms of crime if they regularly encounter easy opportunities for these crimes, especially in their occupations.

The more opportunities for crime that exist, the more crime there will be.

Reducing opportunities for specific forms of crime will reduce the overall amount of crime.

While these claims are consistent with available evidence, it must be acknowledged that the evidence supporting them needs to be strengthened. However, the final and perhaps most important claim is supported by van Dijk et al.’s (2012) edited volume, whose contributors collectively argue that improved security is the best explanation for the recent declines in crime in most Western countries. If their thesis holds, it will have a profound influence on criminological theory and crime policy. For the remaining claims, doubters might consider the following:

“Suppose all situational controls were abandoned: no locks, no custom controls, cash left for parking in an open pot for occasional collection, no library checkouts, no baggage screening at airports, no ticket checks at train stations, no traffic lights, etc. Would there be no change in the volume of crime and disorder?” (Tilley and Laycock, 2002: 31).

People have always been curious about the natural world, including themselves and their behavior (in fact, this is probably why you are studying psychology in the first place). Science grew out of this natural curiosity and has become the best way to achieve detailed and accurate knowledge. Keep in mind that most of the phenomena and theories that fill psychology textbooks are the products of scientific research. In a typical introductory psychology textbook, for example, one can learn about specific cortical areas for language and perception, principles of classical and operant conditioning, biases in reasoning and judgment, and people’s surprising tendency to obey those in positions of authority. And scientific research continues because what we know right now only scratches the surface of what we can know.

The Three Goals of Science

The first and most basic goal of science is to describe. This goal is achieved by making careful observations. As an example, perhaps I am interested in better understanding the medical conditions that medical marijuana patients use marijuana to treat. In this case, I could try to access records at several large medical marijuana licensing centers to see for which conditions people are getting licensed to use medical marijuana. Or I could survey a large sample of medical marijuana patients and ask them to report which medical conditions they use marijuana to treat or manage. Indeed, research involving surveys of medical marijuana patients has been conducted and has found that the primary symptom medical marijuana patients use marijuana to treat is pain, followed by anxiety and depression (Sexton, Cuttler, Finnell, & Mischley, 2016). [1]

The second goal of science is to predict. Once we have observed with some regularity that two behaviors or events are systematically related to one another we can use that information to predict whether an event or behavior will occur in a certain situation. Once I know that most medical marijuana patients use marijuana to treat pain I can use that information to predict that an individual who uses medical marijuana likely experiences pain. Of course, my predictions will not be 100% accurate but if the relationship between medical marijuana use and pain is strong then my predictions will have greater than chance accuracy.

The third and ultimate goal of science is to explain. This goal involves determining the causes of behavior. For example, researchers might try to understand the mechanisms through which marijuana reduces pain. Does marijuana reduce inflammation which in turn reduces pain? Or does marijuana simply reduce the distress associated with pain rather than reducing pain itself? As you can see these questions tap at the underlying mechanisms and causal relationships.

Basic versus Applied Research

Scientific research is often classified as being either basic or applied. Basic research in psychology is conducted primarily for the sake of achieving a more detailed and accurate understanding of human behavior, without necessarily trying to address any particular practical problem. The research of Mehl and his colleagues falls into this category. Applied research is conducted primarily to address some practical problem. Research on the effects of cell phone use on driving, for example, was prompted by safety concerns and has led to the enactment of laws to limit this practice. Although the distinction between basic and applied research is convenient, it is not always clear-cut. For example, basic research on sex differences in talkativeness could eventually have an effect on how marriage therapy is practiced, and applied research on the effect of cell phone use on driving could produce new insights into basic processes of perception, attention, and action.


The differences in the design of the Dutch and Italian research evaluation frameworks are related to the question of how bibliometric expertise has been institutionalized in each country. From a theoretical point of view, research assessments can be understood as an intrusion upon reputational control that operates within intellectual fields (Whitley, 2000, 2007). However, research assessments do not replace reputational control, but are an additional institutional layer of work control. The sociological question is how this new institutional layer operates.

The Dutch system follows a model of bibliometric professionalism. In the Netherlands, there is a client relationship between universities and research institutes, with a legally enforced demand for regular performance evaluation on one side and primarily one contract research organization, the CWTS, on the other, providing bibliometric assessment as a professional service. In a study on the jurisdiction of bibliometrics, Petersohn and Heinze (2018) investigated the history of the CWTS, which developed as an expert organization in the context of Dutch science and higher education policies beginning in the 1970s. Even though bibliometrics are not used by all Dutch research organizations or for all disciplines, and some potential clients are satisfied with internet-based “ready-made indicators,” the current SEP sustains a continuous demand for bibliometric assessment. In the 2000s, the CWTS became an established provider, exporting assessment services to clients across several European countries and more widely influencing methodological practices within the field of bibliometric experts (Jappe, 2019).

Regarding professional autonomy, there are two important points to consider in the Dutch system. First, the additional layer of control via the SEP is introduced at the level of the research organization, in Whitley’s (2000) terms, at the level of the employing organization, rather than the intellectual field or discipline. Thus, the purpose of research evaluation is to inform the university or institute leadership about the organization’s strengths and weaknesses regarding research performance. It is the organization’s board that determines its information needs and commissions evaluations. In this way, the role of the employing organization is strengthened vis-à-vis scientific elites from the intellectual field, as the organizational leadership obtains relatively objective information on performance that can be understood and used by nonexperts in their respective fields. However, this enhancement of work control seems to depend on a high level of acceptance of the bibliometric services by Dutch academic elites as professionally sound and nonpartisan information gathering. As Petersohn and Heinze (2018) emphasized, professional bibliometricians in the Netherlands have claimed a jurisdiction that is subordinate to peer review.

This leads to the second point, which is related to the professional autonomy of bibliometrics as a field of experts. In the Dutch system, science and higher education policies have helped create and sustain an academic community of bibliometricians in addition to the expert organization (i.e., the CWTS). The development of bibliometric methods is left to these bibliometric professionals. Neither state agencies nor the universities, as employing organizations, claim expertise in bibliometric methodology. On the other hand, for a professional organization to gain social acceptance of its claims of competence, the CWTS is obliged to closely interact with its clients in order to determine the best way to serve their information needs. Thus, the professional model of bibliometric assessment in the Netherlands strengthens the leadership of employing organizations and supports the development of a subordinate professional jurisdiction of bibliometrics with a certain degree of scientific autonomy. The model of bibliometric professionalism seems to have contributed to the comparatively broad acceptance of quantitative performance evaluation in the Dutch scientific community.

In stark contrast, the Italian system follows a centralized bureaucratic model that co-opts academic elites. In Italy, bibliometric assessment is part of a central state program implemented in 14 disciplinary divisions of public research employment. Reputational control of academic work is taken into account insofar as evaluation is carried out by members of a committee representing disciplinary macroareas. It is the responsibility of these evaluation committees to determine the details of the bibliometric methodology and evaluation criteria appropriate for their area of discipline, whereas ANVUR, as the central agency, specifies a common methodological approach to be followed by all disciplines (Anfossi, Ciolfi, et al., 2016). In this way, the Italian state cooperates with elites from intellectual fields in order to produce the information required for performance comparisons within and across fields. VQR I comprised 14 committees with 450 professorial experts; VQR II comprised 16 committees with 436 professorial experts.

When comparing the two systems, two points are notable regarding the development of a new professional field. First, in the Italian system, an additional layer of work control was introduced at the level of a state agency that determines faculty rankings and at the MIUR, which is responsible for the subsequent budget allocation. Arguably, the role of organizational leadership at universities and research institutes is not strengthened, but circumscribed, by this centralized evaluation program. All public research organizations are assessed against the same performance criteria and left with the same choices to improve performance. Administrative, macrodisciplinary divisions are prescribed as the unitary reference for organizational research performance. Furthermore, although the VQR provides rectors and deans with aggregated information concerning the national ranking positions of their university departments, access to individual performance data is limited to the respective scientists. Thus, the VQR is not designed to inform leadership about the strengths of individuals and groups within their organization. This could be seen as limiting the usefulness of the VQR from a leadership perspective. In addition, the lack of transparency could give rise to concerns about the fairness of individual performance assessments. There seems to be no provision for ex-post validity checks or bottom-up complaints on the part of the research organization or at the individual level. This underlines the top-down, central planning logic of the VQR exercise.

The second point relates to the professionalism of bibliometrics. Italy has expert organizations with bibliometric competence, such as the Laboratory for Studies in Research Evaluation (REV lab) at the Institute for System Analysis and Computer Science of the Italian Research Council (IASI-CNR) in Rome. However, the design and implementation of the national evaluation exercise has remained outside their purview. For example, REV lab took a very critical stance with regard to the VTR and VQR I. Abramo et al. (2009) criticized the fact that many Italian research organizations failed to select their best research products for submission to the VTR, as judged by an ex-post bibliometric comparison of submitted and nonsubmitted products. Accordingly, the validity of the VTR was seriously questioned, as their conclusion suggests: “the overall result of the evaluation exercise is in part distorted by an ineffective initial selection, hampering the capacities of the evaluation to present the true level of scientific quality of the institutions” (Abramo et al., 2009, p. 212).

The VQR has been criticized by the same authors for evaluating institutional performance on the basis of a partial product sample that does not represent the total institutional productivity and covers different fields to different degrees (Abramo & D’Angelo, 2015). The combination of citation count and journal impact developed by ANVUR was also criticized as being methodically flawed (Abramo & D’Angelo, 2016). This criticism is further substantiated by the fact that the methodological design developed by ANVUR for bibliometric-based product ratings clearly deviates from the more common approaches in bibliometric evaluation practice in Europe (Jappe, 2019). Reportedly, the VTR/VQR evaluation framework was introduced by the state against strong resistance from Italian university professors (Geuna & Piolatto, 2016). As shown by Bonaccorsi (2018), the involved experts have exerted great effort to build acceptance of quantitative performance assessment among scientific communities outside the natural and engineering fields.

In summary, the centralized model of bibliometric assessment in Italy severely limits university autonomy by directly linking centralized, state-organized performance assessment and base funding allocation. Although the autonomy of reputational organizations is respected in the sense that intellectual elites are co-opted into groups of evaluating experts, evaluative bibliometricians are not involved as independent experts. In contrast to the situation in the Netherlands, the current Italian research evaluation framework has not led to the development of a professional jurisdiction of bibliometrics.

The Cosmological Pendulum

Universal Constants

Zeitgeist is a German word that means literally the spirit of the times. It can also refer to a trend of thought and feeling during a period. It describes the general mood of a culture or society based on one or many influences coming from science, religion, art, politics, or even economics.

I don't think I have to reiterate the two major ways in which the study of cosmology can be approached; I'm sure you remember what they are. In our present day, these two methods have manifested, and in some cases crystallized, into two distinct areas of science: experimentation and mathematical theory. Theorists often have nothing to do with actual experimentation, and the same can be said of experimenters. And it is this distinction that has been a source of disagreement between various scientific groups who put forth one view of the origin of the universe over another. To see exactly what I'm talking about, let's trace the development of the big bang theory through its various stages. Along the way you'll get a chance to meet an opposing theory, and examine some of the reasons why the big bang was developed in the first place.

Science as a methodology likes to see itself as a revealer of the true nature of the universe, as sort of a seer that can look beneath the veil of appearance. Yet science is practiced by scientists, human beings who bring with themselves a whole set of predispositions, values, and beliefs. And as in any cross section of our society, some will be seriously invested in their positions and viewpoints, taking themselves rather seriously and insisting on the correctness of their views. Of course, there are just as many who don't take this stance and seek to move beyond any personal attachment to who they are and what they've discovered.

Universal Constants

Ex nihilo is a Latin term that, translated, means out of nothing. It was an idea presented by St. Augustine that later became Church doctrine. It is his philosophical explanation of how God created everything out of nothing, which interestingly enough can be applied to the big bang as well. Where did everything contained in the big bang come from, and why did it bang in the first place?

Much of the history of cosmology and its theories is a reflection of these types of people and the cultures they lived in. Often the most widely accepted theory becomes exactly that because of the forceful personality behind the ideas. And while science tries to remain free of influence from things outside of it, the scientists who practice it are still a product of the culture and the times in which they live. In other words, in relation to the theories in cosmology, whether the universe has always existed or began with a bang can't be separated from the influence of the zeitgeist, or spirit of the times. While there isn't enough time to go back through history in detail and show you how the cosmological pendulum has swung from one theory to the other, I can give you a rough outline and a few examples of some time periods in which this occurred. Just remember that there are always many factors impacting how any specific paradigm develops.

  • In ancient Greece the two basic concepts of the empirical (observation and practical application) and deductive (theoretical and mathematical) methods were intimately linked to the conflict between free citizens and the slave populace. The empirical system developed alongside the free craftsman and traders, while the deductive method, which can disregard observation and practical application, arose with the slave master's disdain for manual labor.
  • The Ptolemaic system was strongly influenced by the deductive method (theory and math as opposed to observation). Also at this time, we find the introduction of today's central theme in cosmology, the origin of the universe out of nothing. This ideology was developed out of the somewhat pessimistic and authoritarian worldviews of two founding Church Fathers, Tertullian and St. Augustine. The doctrine of creation ex nihilo served as the basis for a religious social system that saw the world as decaying from a perfect beginning to an ignominious end.
  • During the rise of science, two central concepts of medieval cosmology were overthrown: the idea of a decaying universe, finite in space and time, and the belief that the world could be known through reason and authority. The deductive, finite Ptolemaic system was replaced with the empirical, eternal, and infinite universe that was evolving by natural processes. It was a universe knowable by observation and experiment. The triumph of science was linked to the overthrow of the feudal system, out of which developed free labor and a society of merchants, craftsmen, and free peasants who questioned authoritarian power: religious, political, and economic.
  • Today's view of cosmology is much closer to the systems of Ptolemy and Augustine than Galileo and Kepler. The big bang universe is a finite one that will eventually end in either the big chill or the big crunch (we'll examine both of these theories in Supersymmetry, Superstrings, and Holograms), which like the medieval cosmos is finite in time. The universe of popular cosmology is the product of a single unique event, dissimilar from anything else that has ever occurred, just as the medieval universe was seen as a product of creation.

And finally just to show you how what I outlined above can be revealed in the lives of the people living at some of those times, here are a few quotes from some famous people.

What makes God comprehensible is that he cannot be comprehended.

If I can't laugh in heaven, I don't want to go there.

Religion teaches men how to go to heaven, not how the heavens go.

The most incomprehensible thing about the universe is that it is comprehensible.

The more the universe seems comprehensible, the more it also seems pointless.

We may now be near the end of the search for the ultimate laws of nature.

Excerpted from The Complete Idiot's Guide to Theories of the Universe 2001 by Gary F. Moring. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.

Just 1/3 of Americans "believe" in the theory of evolution by natural selection

The fact that evolution is a touchy subject in the United States isn’t exactly news, but recent results from the Pew Research Center have indicated that, with a margin of error of +/- 3%, only about one third of American adults believe in evolution by natural selection - a process that is accepted by 99% of researchers working in the life sciences.

Poll results show that over a third of adults in this country believe that humans have always existed in their present form, including belief in Creationism as it is laid out in religious texts.

While 60% of adults acknowledge evolution, only a little over half of those (32% of all adults) accept the scientific explanation for evolution. Another 24% of all adults accept evolutionary change over time but believe that it has been directed by a supreme being. This hypothesis is known as Intelligent Design and was conceived as a means of reconciling faith-based ideals with observable scientific facts. Seven percent of those polled did not have an opinion either way.

There are distinct differences in acceptance of evolution across political affiliations. Those who identify as Democrats have increased their acceptance of evolution, going from 64% up to 67% since 2009. Independents have decreased their acceptance slightly, falling from 67% to 65% within that same timeframe. The biggest decline came from self-identified Republicans: in 2009, 54% of Republicans acknowledged evolution, though that number has plummeted to 43% in the latest poll.

The poll results also indicated that age and education play a major role in acceptance of evolution. While only 49% of adults over age 65 accept evolution, younger people have thankfully bucked that trend: among those who are 18-29 years old, 68% accept evolution. Evolution is accepted by 51% of those with at most a high school degree, though that number jumps to 72% among those who have earned a college degree.

While these figures apply to views about human evolution, every single demographic shows higher acceptance when it comes to the evolution of animals.

The most astonishing part of the evolution debate in the United States is that it is not a scientific controversy whatsoever. The acceptance of evolution by natural selection in the US (32%) is more on par with countries in the Middle East, while most European countries range from 70-82%.

Don't Be Taken in by The Nonsense Science of "Cell Memory Theory"

According to a story doing the rounds on social media, organ transplant patients can take on the personalities of their donors. Don't believe the hype.

A story that appeared on science news website Medical Daily in July 2013 has once again been doing the rounds on social media. The article claims that patients who have had transplants have been known to take on the personalities of their donors. The claim is based on the idea that cells have memories - an idea that is normally the preserve of quacks and homeopaths. In a rather spectacular own goal, while suggesting cellular memory "theory" to be true, the article actually inadvertently links to a skeptical website debunking the concept.

The article relies on anecdotal evidence and a couple of very small retrospective studies of heart transplant patients. The first of the studies, published in the Journal of Near-Death Studies, had only ten participants, including a patient only 7 months old at the time of their surgery. The study draws whopping conclusions from incredibly scant evidence, but a close reading provides plenty of hints to the array of possible confounding factors. The case of a five-year-old patient described in the study, for example, reads as follows:

The donor was a 3-year-old boy who fell from an apartment window. The recipient was a 5-year-old boy with septal defect and cardiomyopathy. The donor’s mother reported:

It was uncanny. When I met the family and Daryl [the recipient] at the transplant meeting, I broke into tears. We went up to the giving tree where you hand a token symbolizing your donor. I was already crying when my husband told me to look at the table we were passing. It was the [recipient’s family] with Daryl sitting there. I knew it right away. Daryl smiled at me exactly like Timmy [her son, the donor] did. After we talked for hours with Daryl’s parents, we were comforted. It somehow just didn't seem strange at all after a while. When we heard that Daryl had made up the name Timmy and got his age right, we began to cry. But they were tears of relief because we knew that Timmy’s spirit was alive.

The recipient reported:

I gave the boy a name. He’s younger than me and I call him Timmy. He’s just a little kid. He’s a little brother like about half my age. He got hurt bad when he fell down. He likes Power Rangers a lot, I think, just like I used to. I don’t like them anymore, though. I like Tim Allen on “Tool Time,” so I called him Tim. I wonder where my old heart went, too. I sort of miss it. It was broken, but it took care of me for a while.

The recipient’s father reported:

Daryl never knew the name of his donor or his age. We didn’t know, either, until recently. We just learned that the boy who died had fallen from a window. We didn't even know his age until now. Daryl had it about right. Probably just a lucky guess or something, but he got it right. What is spooky, though, is that he not only got the age right and some idea of how he died, he got the name right. The boy’s name was Thomas, but for some reason his immediate family called him “Tim.”

The recipient’s mother added:

Are you going to tell him the real Twilight Zone thing? Timmy fell trying to reach a Power Ranger toy that had fallen on the ledge of the window. Daryl won’t even touch his Power Rangers any more.

The evidence in this study is plainly anecdotal and there are clear explanations for the coincidences - for example, the boy who suddenly stopped playing with Power Ranger toys probably simply grew out of them. The study however purported to find "changes in food, music, art, sexual, recreational, and career preferences in addition to name associations and sensory experiences" - a finding that is almost certainly due to random chance, given the tiny sample size and cherry-picked cases.

The other study was larger, with 47 patients, but it found that 79% of the participants felt their personality hadn't changed, and 15% felt their personality had changed because of the life-threatening event (the elephant in the room, which common sense suggests may be the real reason for any personality changes following life-threatening surgery). Only 6% (three patients) felt their personality had changed due to their new heart — a finding that could clearly be due to either random chance, or the patients misattributing the real cause of any change in their personality. Both of the studies are approximately two decades old. If this is the best evidence for a claim that would have such profound implications for our understanding of the workings of the human body, then I think we can safely assume the theory is bunk.

Rest assured, if you are ever unlucky enough to be in a position where you need to have an organ transplanted, there is no remotely reliable scientific evidence that you will suddenly take on the personality of your donor. There is however some evidence that you might be able to take on their artistic abilities. When I say some evidence, I mean one Daily Mail article - and when I say artistic abilities, I mean the newfound ability to color in very, very badly. You couldn't make this up. The claim is sheer nonsense of the highest order.


Experiment: An operation which can produce some well-defined outcomes is called an Experiment.

Example: When we toss a coin, we know that either head or tail shows up. So, the operation of tossing a coin may be said to have two well-defined outcomes, namely, (a) heads showing up and (b) tails showing up.

Random Experiment: When we roll a die, we are well aware of the fact that any of the numerals 1, 2, 3, 4, 5, or 6 may appear on the upper face, but we cannot say which exact number will show up.

Such an experiment, in which all possible outcomes are known but the exact outcome cannot be predicted in advance, is called a Random Experiment.

Sample Space: All the possible outcomes of an experiment, taken as a whole, form the Sample Space.

Example: When we roll a die we can get any outcome from 1 to 6. All the possible numbers which can appear on the upper face form the Sample Space (denoted by S). Hence, the Sample Space of a die roll is S = {1, 2, 3, 4, 5, 6}.

Outcome: Any possible result out of the Sample Space S for a Random Experiment is called an Outcome.

Example: When we roll a die, we might obtain 3 or when we toss a coin, we might obtain heads.

Event: Any subset of the Sample Space S is called an Event (denoted by E). When an outcome which belongs to the subset E takes place, it is said that an Event has occurred. Whereas, when an outcome which does not belong to subset E takes place, the Event has not occurred.

Example: Consider the experiment of throwing a die. Here the Sample Space S = {1, 2, 3, 4, 5, 6}. Let E denote the event of 'a number less than 4 appearing.' Thus the Event E = {1, 2, 3}. If the number 1 appears, we say that Event E has occurred. Similarly, if the outcome is 2 or 3, we can say Event E has occurred, since these outcomes belong to subset E.
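The sample-space and event definitions above map naturally onto Python sets; the following is a minimal sketch, where the name `has_occurred` is illustrative rather than from the text.

```python
# Sample space and event for one roll of a fair die, modeled as sets.
sample_space = {1, 2, 3, 4, 5, 6}   # all possible outcomes
event_e = {1, 2, 3}                 # "a number less than 4 appears"

def has_occurred(event, outcome):
    """An event occurs when the observed outcome belongs to its subset."""
    return outcome in event

print(has_occurred(event_e, 1))  # True: 1 belongs to {1, 2, 3}
print(has_occurred(event_e, 5))  # False: 5 does not
```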

Trial: By a trial, we mean performing a random experiment.

Example: (i) Tossing a fair coin, (ii) rolling an unbiased die [4]

When dealing with experiments that are random and well-defined in a purely theoretical setting (like tossing a fair coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. For example, tossing a fair coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability:
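The "desired outcomes divided by all outcomes" calculation for the two-coin example can be sketched by enumeration (variable names here are illustrative):

```python
from fractions import Fraction
from itertools import product

# Enumerate all equally likely outcomes of two fair coin tosses and
# count the desired ones.
outcomes = list(product("HT", repeat=2))             # HH, HT, TH, TT
desired = [o for o in outcomes if o == ("H", "H")]
p_head_head = Fraction(len(desired), len(outcomes))
print(p_head_head)  # 1/4
```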

Objectivists assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome when the experiment is repeated indefinitely. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. [5] A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.

Subjectivists assign numbers per subjective probability, that is, as a degree of belief. [6] The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E." [7] The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some (subjective) prior probability distribution. These data are incorporated in a likelihood function. The product of the prior and the likelihood, when normalized, results in a posterior probability distribution that incorporates all the information known to date. [8] By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions, regardless of how much information the agents share. [9]
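The frequentist reading, probability as long-run relative frequency, can be sketched with a short simulation (a sketch, not a proof; the sample size is arbitrary):

```python
import random

# Relative frequency of heads over many simulated fair-coin tosses:
# under the frequentist interpretation, this settles near 1/2.
random.seed(0)          # fixed seed for reproducibility
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
rel_freq = heads / n
print(rel_freq)  # close to 0.5 for large n
```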

The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference. [10]

The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues [note 2] are still obscured by the superstitions of gamblers. [11]

According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." [12] However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence. [13]

The earliest forms of statistical inference were developed by Middle Eastern mathematicians studying cryptography between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Al-Kindi (801–873) made the earliest known use of statistical inference in his work on cryptanalysis and frequency analysis. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis. [14]

The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes [15] ). Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. [16] Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. [17] See Ian Hacking's The Emergence of Probability [10] and James Franklin's The Science of Conjecture [18] for histories of the early development of the very concept of mathematical probability.

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. [19] The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.

The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error—disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error. [20] The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old." [20]

Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.

Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). [21] In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,

φ(x) = c e^(−h²x²),

where h is a constant depending on precision of observation, and c is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.

In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.

In 1906, Andrey Markov introduced [22] the notion of Markov chains, which played an important role in stochastic process theory and its applications. The modern theory of probability, based on measure theory, was developed by Andrey Kolmogorov in 1931. [23]

On the geometric side, contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin). [24] See integral geometry for more info.

Like other theories, the theory of probability is a representation of its concepts in formal terms—that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.

There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability.

Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation.

An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict. [25]

In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play. [26]
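The way payout schedules guarantee a house profit can be illustrated with an expected-value calculation on a hypothetical single-number die bet (the game and its payouts are made up for illustration):

```python
from fractions import Fraction

# Hypothetical bet: the player stakes 1 unit on one face of a fair die
# and wins `payout` units if that face comes up.
p_win = Fraction(1, 6)

def expected_value(payout):
    """Player's expected profit per unit staked."""
    return p_win * payout - (1 - p_win) * 1

print(expected_value(5))  # 0: a 5-to-1 payout makes the game fair
print(expected_value(4))  # -1/6: a 4-to-1 payout gives the house an edge
```

Paying out slightly less than the fair odds is exactly what makes the casino's long-run profit a statistical certainty while still letting players win often enough to keep playing.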

Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty. [27]

The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.

Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as Ω. [28] The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls. These collections are called "events". In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.

A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1, 2, 3, 4, 5, 6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1, 6}, {3}, and {2, 4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events. [29]
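Both requirements, total probability one and additivity over mutually exclusive events, can be checked mechanically for the fair-die assignment (a sketch; the helper `prob` is illustrative):

```python
from fractions import Fraction

# Probability assignment for a fair die: 1/6 to each outcome.
p = {outcome: Fraction(1, 6) for outcome in range(1, 7)}

def prob(event):
    """Probability of an event = sum of its outcomes' probabilities."""
    return sum(p[o] for o in event)

total = prob({1, 2, 3, 4, 5, 6})
a, b, c = {1, 6}, {3}, {2, 4}    # pairwise disjoint (mutually exclusive)
union = prob(a | b | c)

print(total)                                   # 1: normalization holds
print(union == prob(a) + prob(b) + prob(c))    # True: additivity holds
```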

The probability of an event A is written as P(A), [28] [30] p(A), or Pr(A). [31] This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.

The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as A′, Aᶜ, Ā, ¬A, or ∼A; its probability is given by P(not A) = 1 − P(A). [32] As an example, the chance of not rolling a six on a six-sided die is 1 − (chance of rolling a six) = 1 − 1/6 = 5/6. For a more comprehensive treatment, see Complementary event.
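The complement rule P(not A) = 1 − P(A) for the die example works out exactly with rational arithmetic:

```python
from fractions import Fraction

# Complement rule for A = "roll a six" on a fair six-sided die.
p_six = Fraction(1, 6)
p_not_six = 1 - p_six
print(p_not_six)  # 5/6
```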

If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A ∩ B). [28]

Independent events

If two events, A and B, are independent then the joint probability is [30]

P(A and B) = P(A ∩ B) = P(A) P(B).

For example, if two coins are flipped, then the chance of both being heads is 1/2 × 1/2 = 1/4. [33]
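The product rule for the two-coin example can be verified by enumerating the sample space (the helper `prob` is illustrative):

```python
from fractions import Fraction
from itertools import product

# All four equally likely outcomes of flipping two fair coins.
space = list(product("HT", repeat=2))

def prob(pred):
    """Probability of the event selected by predicate `pred`."""
    return Fraction(sum(1 for o in space if pred(o)), len(space))

p_first_heads = prob(lambda o: o[0] == "H")     # 1/2
p_second_heads = prob(lambda o: o[1] == "H")    # 1/2
p_both_heads = prob(lambda o: o == ("H", "H"))  # 1/4

print(p_both_heads == p_first_heads * p_second_heads)  # True
```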

Mutually exclusive events

If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.

If two events are mutually exclusive, then the probability of both occurring is denoted as P(A ∩ B) and

P(A and B) = P(A ∩ B) = 0.

If two events are mutually exclusive, then the probability of either occurring is denoted as P ( A ∪ B ) and

P(A or B) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = P(A) + P(B) − 0 = P(A) + P(B)

For example, the chance of rolling a 1 or 2 on a six-sided die is P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.
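The addition rule for mutually exclusive events reduces to a plain sum, which exact rational arithmetic confirms:

```python
from fractions import Fraction

# Faces 1 and 2 of a fair die cannot occur on the same roll, so their
# probabilities simply add.
p1 = Fraction(1, 6)
p2 = Fraction(1, 6)
p_1_or_2 = p1 + p2
print(p_1_or_2)  # 1/3
```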

Not mutually exclusive events

If the events are not mutually exclusive then

P(A or B) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

Conditional probability

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A ∣ B), [28] and is read "the probability of A, given B". It is defined by [34]

P(A ∣ B) = P(A ∩ B) / P(B), provided that P(B) ≠ 0.
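The definition P(A | B) = P(A ∩ B) / P(B) can be worked through by counting on one fair die roll, with illustrative events A = "even number" and B = "greater than 3":

```python
from fractions import Fraction

# Single roll of a fair die.
space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # "an even number appears"
B = {4, 5, 6}   # "a number greater than 3 appears"

def prob(event):
    """Equally likely outcomes: |event| / |space|."""
    return Fraction(len(event), len(space))

p_a_given_b = prob(A & B) / prob(B)   # P(A|B) = P(A and B) / P(B)
print(p_a_given_b)  # 2/3: of the rolls above 3, two out of three are even
```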

Inverse probability

Summary of probabilities

In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon), though there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. to know them. In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled, as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of the inertia and friction of the wheel, the weight, smoothness and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically on the order of magnitude of the Avogadro constant, 6.02 × 10²³) that only a statistical description of its properties is feasible.

Probability theory is required to describe quantum phenomena. [35] A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing; the outcome is explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". [36] Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. [37] In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.

How the Big Bang Theory Works

Because of the limitations of the laws of science, we can't make any guesses about the instant the universe came into being. Instead, we can look at the period immediately following the creation of the universe. Right now, the earliest moment scientists talk about occurs at t = 1 × 10⁻⁴³ seconds (the "t" stands for the time after the creation of the universe). In other words, take the number 1.0 and move the decimal place to the left 43 times.

Cambridge University refers to the study of these earliest moments as quantum cosmology [source: Cambridge University]. At the earliest moments of the big bang, the universe was so small that classical physics didn't apply to it. Instead, quantum physics was in play. Quantum physics deals with physics on a subatomic scale. Much of the behavior of particles on the quantum scale seems strange to us, because the particles appear to defy our understanding of classical physics. Scientists hope to discover the link between quantum and classical physics, which will give us a lot more information about how the universe works.

At t = 1 × 10⁻⁴³ seconds, the universe was incredibly small, dense and hot. This homogenous area of the universe spanned a region of only 1 × 10⁻³³ centimeters (3.9 × 10⁻³⁴ inches). Today, that same stretch of space spans billions of light years. During this phase, big bang theorists believe, matter and energy were inseparable. The four primary forces of the universe were also a united force. The temperature of this universe was 1 × 10³² kelvin (1 × 10³² degrees Celsius, 1.8 × 10³² degrees Fahrenheit). As tiny fractions of a second passed, the universe expanded rapidly. Cosmologists refer to the universe's expansion as inflation. The universe doubled in size several times in less than a second [source: UCLA].

As the universe expanded, it cooled. At around t = 1 × 10⁻³⁵ seconds, matter and energy decoupled. Cosmologists call this baryogenesis -- baryonic matter is the kind of matter we can observe. In contrast, we can't observe dark matter, but we know it exists by the way it affects energy and other matter. During baryogenesis, the universe filled with a nearly equal amount of matter and anti-matter. There was slightly more matter than anti-matter, so while most particles and anti-particles annihilated each other, some particles survived. These particles would later combine to form all the matter in the universe.

A period of particle cosmology followed the quantum age. This period starts at t = 1 × 10⁻¹¹ seconds. This is a phase that scientists can recreate in lab conditions with particle accelerators. That means that we have some observational data on what the universe must have been like at this time. The unified force broke down into its components. The forces of electromagnetism and the weak nuclear force split off. Photons outnumbered matter particles, but the universe was too dense for light to shine within it.

Next came the period of standard cosmology, which begins .01 second after the beginning of the big bang. From this moment on, scientists feel they have a pretty good handle on how the universe evolved. The universe continued to expand and cool, and the subatomic particles formed during baryogenesis began to bond together. They formed neutrons and protons. By the time a full second had passed, these particles could form the nuclei of light elements like hydrogen (in the form of its isotope, deuterium), helium and lithium. This process is known as nucleosynthesis. But the universe was still too dense and hot for electrons to join these nuclei and form stable atoms.

That's a busy first second. Next we'll find out what happened over the next 13 billion years.

Saying that the universe is homogeneous and isotropic is another way of saying that every location in the universe is the same as every other one, and that there's no special or central spot for the universe. It's often called the Copernican or cosmological principle.
