Atomic structure is the branch of physics that studies the internal structure of atoms. Atoms, originally thought to be indivisible, are complex systems. They have a massive nucleus of protons and neutrons, around which electrons move in otherwise empty space. Atoms are very small: their dimensions are about 10⁻¹⁰ to 10⁻⁹ m, and the dimensions of the nucleus are about 100,000 times smaller still (10⁻¹⁵ to 10⁻¹⁴ m). Therefore, atoms can only be "seen" indirectly, in an image with very high magnification (for example, using a field-emission projector). But even in this case, the atoms cannot be seen in detail. Our knowledge of their internal structure is based on a huge amount of experimental data, which indirectly but convincingly supports this picture.

Ideas about the structure of the atom changed radically in the 20th century under the influence of new theoretical ideas and experimental data. There are still unresolved questions in the description of the internal structure of the atomic nucleus, which are the subject of intensive research. The following sections outline the history of the development of ideas about the structure of the atom as a whole; a separate article is devoted to the structure of the nucleus (ATOMIC NUCLEUS STRUCTURE), since these ideas developed largely independently. The energy required to study the outer shells of an atom is relatively small, on the order of thermal or chemical energies. For this reason, electrons were discovered experimentally long before the discovery of the nucleus.

The nucleus, despite its small size, is very strongly bound, so it can be destroyed and studied only with the help of forces millions of times more intense than the forces acting between atoms. Rapid progress in understanding the internal structure of the nucleus began only with the advent of particle accelerators. It is this huge difference in size and binding energy that allows us to consider the structure of the atom as a whole separately from the structure of the nucleus.

To get an idea of the size of an atom and of the empty space it occupies, consider the atoms that make up a drop of water 1 mm in diameter. If this drop is mentally enlarged to the size of the Earth, then the hydrogen and oxygen atoms in each water molecule would have a diameter of 1–2 m. The bulk of the mass of each atom is concentrated in its nucleus, whose diameter on this scale would be only 0.01 mm.

The history of the most general ideas about the atom usually dates back to the Greek philosopher Democritus (c. 460 - c. 370 BC), who thought a great deal about the smallest particles into which any substance could be divided. The group of Greek philosophers who held that such tiny indivisible particles exist were called atomists. The Greek philosopher Epicurus (c. 342–270 BC) accepted the atomic theory, and in the first century BC one of his followers, the Roman poet and philosopher Lucretius Carus, expounded the teachings of Epicurus in the poem "On the Nature of Things," thanks to which it was preserved for subsequent generations. Aristotle (384–322 BC), one of the greatest scientists of antiquity, did not accept the atomic theory, and his views on philosophy and science subsequently dominated medieval thinking. Atomistic theory seems to have been all but forgotten until the very end of the Renaissance, when purely speculative philosophical reasoning began to give way to experiment.

During the Renaissance, systematic research began in the fields now called chemistry and physics, bringing with it new insights into the nature of “indivisible particles.” R. Boyle (1627–1691) and I. Newton (1643–1727) based their reasoning on the idea of ​​the existence of indivisible particles of matter. However, neither Boyle nor Newton needed a detailed atomic theory to explain the phenomena that interested them, and the results of their experiments did not reveal anything new about the properties of “atoms.”

ATOMIC STRUCTURE

Dalton's laws. The first truly scientific substantiation of the atomic theory, which convincingly demonstrated the rationality and simplicity of the hypothesis that every chemical element consists of the smallest particles, was the work of the English school mathematics teacher J. Dalton (1766–1844), whose article devoted to this problem appeared in 1803.

Dalton studied the properties of gases, in particular the ratios of the volumes of gases that reacted to form a chemical compound, for example in the formation of water from hydrogen and oxygen. He established that the ratios of the reacted amounts of hydrogen and oxygen are always ratios of small integers. Thus, when water (H₂O) is formed, 2.016 g of hydrogen gas reacts with 16 g of oxygen, and when hydrogen peroxide (H₂O₂) is formed, 32 g of oxygen gas reacts with 2.016 g of hydrogen. The masses of oxygen reacting with the same mass of hydrogen to form these two compounds are related to each other as small whole numbers:

16 : 32 = 1 : 2.
Based on such results, Dalton formulated his "law of multiple proportions." According to this law, if two elements combine in different proportions to form different compounds, then the masses of one of the elements combined with the same amount of the second element are related as small whole numbers. According to Dalton's second law, the "law of constant proportions," in any chemical compound the ratio of the masses of its constituent elements is always the same. A large amount of experimental data, relating not only to gases but also to liquids and solid compounds, was collected by J. Berzelius (1779–1848), who made accurate measurements of the reacting masses of elements for many compounds. His data confirmed the laws formulated by Dalton and convincingly demonstrated that each element has a smallest unit of mass.

Dalton's atomic postulates had the advantage over the abstract reasoning of the ancient Greek atomists that his laws made it possible to explain and relate the results of real experiments, as well as to predict the results of new ones. He postulated that 1) all atoms of the same element are identical in all respects, in particular their masses are the same; 2) atoms of different elements have different properties, in particular their masses differ; 3) a compound, in contrast to an element, contains a certain integer number of atoms of each of its constituent elements; 4) in chemical reactions atoms may be redistributed, but not a single atom is destroyed or created anew. (In fact, as it turned out at the beginning of the 20th century, these postulates are not strictly fulfilled, since atoms of the same element can have different masses, for example, hydrogen has three such varieties, called isotopes; in addition, atoms can undergo radioactive transformations and even disintegrate completely, but not in the chemical reactions considered by Dalton.) Based on these four postulates, Dalton's atomic theory provided the simplest explanation of the laws of constant and multiple proportions.

Although Dalton's laws underlie all chemistry, they do not determine the actual sizes and masses of atoms. They say nothing about the number of atoms contained in a certain mass of an element or compound. The molecules of simple substances are too small to be weighed individually, so indirect methods must be used to determine the masses of atoms and molecules.

Avogadro's number. In 1811, A. Avogadro (1776–1856) put forward a hypothesis that greatly simplified the analysis of how compounds are formed from elements and established the distinction between atoms and molecules. His idea was that equal volumes of gases at the same temperature and pressure contain the same number of molecules. In principle, a hint of this can be found in the earlier work of J. Gay-Lussac (1778–1850), who established that the ratio of the volumes of gaseous elements entering into a chemical reaction is expressed in whole numbers, although different from the mass ratios obtained by Dalton. For example, 2 liters of hydrogen gas (H₂ molecules), combining with 1 liter of oxygen gas (O₂ molecules), form 2 liters of water vapor (H₂O molecules).

The true number of molecules in a given volume of gas is extremely large, and until 1865 it could not be determined with acceptable accuracy. However, already in Avogadro's time rough estimates were made on the basis of the kinetic theory of gases. A very convenient unit for measuring the amount of a substance is the mole, i.e. the amount of a substance that contains as many molecules as there are atoms in 0.012 kg of the most common isotope of carbon, ¹²C. One mole of an ideal gas under normal conditions, i.e. at standard temperature and pressure, occupies a volume of 22.4 liters. Avogadro's number is the total number of molecules in one mole of a substance, or in 22.4 liters of gas under these conditions. Other methods, such as X-ray diffraction, give more accurate values of Avogadro's number N₀ than those obtained from kinetic theory. The currently accepted value is 6.0221367×10²³ atoms (molecules) per mole. Consequently, 1 liter of air contains approximately 3×10²² molecules of oxygen, nitrogen and other gases.
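As a quick numerical check of the figures above (a minimal sketch using only the constants quoted in this paragraph), the number of molecules in one liter of gas under normal conditions follows directly from Avogadro's number and the 22.4-liter molar volume:

```python
# Molecules in 1 liter of an ideal gas under normal conditions (STP).
N_A = 6.0221367e23     # Avogadro's number, molecules per mole
V_molar = 22.4         # molar volume at STP, liters per mole

molecules_per_liter = N_A / V_molar
print(f"{molecules_per_liter:.1e} molecules per liter")   # ~2.7e22, i.e. about 3e22
```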

The important role of Avogadro's number for atomic physics is due to the fact that it allows one to determine the mass and approximate dimensions of an atom or molecule. Since the mass of 22.4 liters of H₂ gas is 2.016×10⁻³ kg, the mass of one hydrogen atom is 1.67×10⁻²⁷ kg. If we assume that in a solid body the atoms are packed close to each other, then Avogadro's number allows us to estimate approximately the radius r of, say, aluminum atoms. For aluminum, 1 mole amounts to 0.027 kg, and the density is 2.7×10³ kg/m³. In this case we have

(4/3)πr³ ≈ M/(ρN₀),

where M is the molar mass and ρ the density, whence r ≈ 1.6×10⁻¹⁰ m. Thus, the first estimates of Avogadro's number gave an idea of atomic sizes.
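The same estimate is easy to reproduce numerically; the sketch below solves (4/3)πr³ = M/(ρN₀) for aluminum using the molar mass and density quoted above:

```python
# Rough estimate of the radius of an aluminum atom, assuming the atoms are
# packed so closely that each occupies a sphere of volume M / (rho * N0).
import math

N0 = 6.022e23        # Avogadro's number, 1/mol
M = 0.027            # molar mass of aluminum, kg/mol
rho = 2.7e3          # density of aluminum, kg/m^3

volume_per_atom = M / (rho * N0)                    # m^3 per atom
r = (3 * volume_per_atom / (4 * math.pi)) ** (1/3)  # radius of the equivalent sphere, m
print(f"r ~ {r:.1e} m")   # ~1.6e-10 m
```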

Discovery of the electron. Experimental data related to the formation of chemical compounds confirmed the existence of "atomic" particles and made it possible to judge the small size and mass of individual atoms. However, the actual structure of atoms, including the existence of even smaller particles making up atoms, remained unclear until J. J. Thomson's discovery of the electron in 1897. Until then, the atom was considered indivisible, and the differences in the chemical properties of the various elements had no explanation. Even before Thomson's discovery, a number of interesting experiments had been carried out in which other researchers studied the electric current in glass tubes filled with gas at low pressure. Such tubes, called Geissler tubes after the German glassblower H. Geissler (1815–1879), who first began making them, emitted a bright glow when connected to the high-voltage winding of an induction coil. W. Crookes (1832–1919) became interested in these electrical discharges and established that the nature of the discharge in the tube changes with the pressure, and that the discharge disappears completely at high vacuum. Later studies by J. Perrin (1870–1942) showed that the "cathode rays" causing the glow are negatively charged particles that move in straight lines but can be deflected by a magnetic field. However, the charge and mass of the particles remained unknown, and it was unclear whether all negative particles were the same.

Thomson's great merit was the proof that all the particles forming cathode rays are identical to one another and are part of matter. Using a special type of discharge tube, shown in Fig. 1, Thomson measured the speed and the charge-to-mass ratio of the cathode-ray particles, later called electrons. The electrons flew out of the cathode under the influence of a high-voltage discharge in the tube. Only those flying along the axis of the tube passed through the apertures D and E.

Fig. 1. CHARGE-TO-MASS RATIO. The tube used by the English physicist J. J. Thomson to determine the charge-to-mass ratio for cathode rays. These experiments led to the discovery of the electron.

In the normal mode these electrons hit the center of the luminescent screen. (Thomson's tube was the first "cathode-ray tube" with a screen, a precursor of the television picture tube.) The tube also contained a pair of capacitor plates which, when energized, could deflect the electrons. The electric force F_E acting on a charge e in the electric field E is given by the expression

F_E = eE.

In addition, a magnetic field could be created in the same region of the tube by a pair of current-carrying coils, capable of deflecting the electrons in the opposite direction. The force F_H exerted by the magnetic field H is proportional to the field strength, the particle speed v, and its charge e:

F_H = Hev.

Thomson adjusted the electric and magnetic fields so that the total deflection of the electrons was zero, i.e. the electron beam returned to its original position. Since in this case the two forces F_E and F_H are equal, the speed of the electrons is given by

v = E/H.

Thomson found that this speed depends on the voltage V on the tube and that the kinetic energy of the electrons, mv²/2, is directly proportional to this voltage, i.e. mv²/2 = eV. (Hence the term "electron-volt" for the energy acquired by a particle with a charge equal to that of the electron when accelerated through a potential difference of 1 V.) Combining this equation with the expression for the speed of the electron, he found the charge-to-mass ratio:

e/m = E²/(2VH²).

These experiments made it possible to determine the ratio e/m for the electron and gave an approximate value of the charge e. The exact value of e was measured by R. Millikan, who in his experiments held charged oil droplets suspended in the air between the plates of a capacitor. Today the characteristics of the electron are known with great accuracy:

e ≈ 1.602×10⁻¹⁹ C, m ≈ 9.109×10⁻³¹ kg.

Thus, the mass of the electron is far smaller than the mass of the hydrogen atom:

m/m_H ≈ 1/1837.
Thomson's experiments showed that electrons can be extracted in electrical discharges from any substance. Since all electrons are the same, the elements must differ only in the number of electrons. In addition, the small value of the electron mass indicated that the mass of the atom is not concentrated in the electrons.
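To make the algebra above concrete, here is a small sketch of the crossed-field method. The field strengths and accelerating voltage are illustrative values chosen to be roughly consistent with a real electron; they are not Thomson's actual settings.

```python
# Sketch of Thomson's crossed-field determination of e/m (illustrative numbers).
E = 5.1e3        # electric field between the plates, V/m   (assumed value)
H = 5.0e-4       # magnetic flux density, T                  (assumed value)
V = 300.0        # accelerating voltage, V                   (assumed value)

# Balance condition eE = eHv gives the beam speed:
v = E / H                          # m/s
# Energy relation m*v**2/2 = e*V then gives the charge-to-mass ratio:
e_over_m = E**2 / (2 * V * H**2)   # C/kg

print(f"beam speed v = {v:.2e} m/s")
print(f"e/m = {e_over_m:.2e} C/kg")   # close to the accepted ~1.76e11 C/kg
```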

Thomson mass spectrograph. Soon the remaining, positively charged part of the atom could be observed using the same discharge tube with which the electron had been discovered, albeit in a modified form. Already the first experiments with discharge tubes had shown that if a cathode with a hole is placed in the middle of the tube, then positively charged particles pass through the "channel" in the cathode and cause the fluorescent screen located at the end of the tube opposite the anode to glow. These positive "channel rays" were also deflected by a magnetic field, but in the direction opposite to the electrons.

Thomson decided to measure the mass and charge of these new rays, again using electric and magnetic fields to deflect the particles. His instrument for studying positive rays, the "mass spectrograph," is shown schematically in Fig. 2. It differs from the device shown in Fig. 1 in that the electric and magnetic fields deflect the particles at right angles to each other, so that a "zero" deflection cannot be obtained. On the path between the anode and cathode, positively charged atoms can lose one or more electrons and for this reason can be accelerated to different energies. Atoms of the same type with the same charge and mass, but with some spread in final velocities, trace a curved line (a segment of a parabola) on the luminescent screen or photographic plate. If atoms with different masses are present, the heavier atoms (with the same charge) deviate less from the central axis than the lighter ones. Fig. 3 shows a photograph of parabolas obtained with a Thomson mass spectrograph. The narrowest parabola corresponds to the heaviest singly ionized atom (the mercury atom), from which one electron has been knocked out. The two widest parabolas correspond to hydrogen, one to atomic H⁺ and the other to molecular H₂⁺, both singly ionized. In some cases two, three, or even four charges are lost, but atomic hydrogen was never observed to be ionized more than once. This circumstance was the first indication that the hydrogen atom has only one electron, i.e. it is the simplest of atoms.

Fig. 2. MASS SPECTROGRAPH used by Thomson to determine the relative masses of various atoms from the deflection of positive rays in magnetic and electric fields.

Fig. 3. MASS SPECTRA: photographs showing the distribution of ionized atoms of five substances, obtained with the mass spectrograph. The greater the mass of the atoms, the smaller the deflection.

Other evidence of the complex structure of the atom. At the same time that Thomson and others were experimenting with cathode rays, the discovery of X-rays and of radioactivity brought further evidence of the complex structure of the atom. In 1895, W. Roentgen (1845–1923) accidentally discovered a mysterious radiation ("X-rays") penetrating the black paper in which he had wrapped a Crookes tube while examining the green luminescent region of the electrical discharge. The X-rays caused a distant screen coated with crystalline barium platinocyanide to glow. Roentgen found that various substances of different thicknesses placed between the screen and the tube weakened the glow but did not extinguish it completely. This indicated the extremely high penetrating power of X-rays. Roentgen also established that these rays propagate in straight lines and are not deflected by electric and magnetic fields. The appearance of such invisible, penetrating radiation from the electron bombardment of various materials was something completely new. It was known that the visible light from Geissler tubes consisted of individual "spectral lines" with specific wavelengths and was therefore associated with "vibrations" of atoms having discrete frequencies. An essential feature of the new radiation, which distinguished it from the optical spectra apart from its high penetrating power, was that the optical spectra of elements with successively increasing numbers of electrons differ completely from one another, whereas the X-ray spectra change only very slightly from element to element.

Another discovery related to atomic structure was that the atoms of some elements can spontaneously emit radiation. This phenomenon was discovered in 1896 by A. Becquerel (1852–1908). Becquerel discovered radioactivity using uranium salts while studying the luminescence of salts under the influence of light and its relation to the luminescence of the glass of an X-ray tube. In one of the experiments, blackening was observed of a photographic plate that had been wrapped in black paper and kept near a uranium salt in complete darkness. This accidental discovery stimulated an intensive search for other examples of natural radioactivity and experiments to determine the nature of the emitted radiation. In 1898, P. Curie (1859–1906) and M. Curie (1867–1934) discovered two more radioactive elements, polonium and radium. E. Rutherford (1871–1937), having studied the penetrating power of uranium radiation, showed that there are two types of radiation: a very "soft" radiation, which is easily absorbed by matter and which Rutherford called alpha rays, and a more penetrating radiation, which he called beta rays. Beta rays turned out to be identical to ordinary electrons, the "cathode rays" produced in discharge tubes. Alpha rays, as it turned out, have the same charge and mass as helium atoms stripped of their two electrons. A third type of radiation, called gamma rays, turned out to be similar to X-rays but had an even greater penetrating power.

All these discoveries clearly showed that the atom is not “indivisible.” Not only is it made up of smaller parts (electrons and heavier positive particles), but these and other subparticles appear to be spontaneously emitted during the radioactive decay of heavy elements. In addition, atoms not only emit radiation in the visible region at discrete frequencies, but can also become so excited that they begin to emit “harder” electromagnetic radiation, namely X-rays.

Thomson's model of the atom. J. J. Thomson, who made a huge contribution to the experimental study of the structure of the atom, sought a model that would explain all its known properties. Since the predominant fraction of the mass of an atom is concentrated in its positively charged part, he assumed that the atom is a spherical distribution of positive charge with a radius of approximately 10⁻¹⁰ m, with electrons on its surface held by elastic forces that allow them to oscillate (Fig. 4). The net negative charge of the electrons exactly cancels the positive charge, so the atom is electrically neutral. The electrons sit on the sphere but can perform simple harmonic oscillations about their equilibrium positions. Such oscillations can occur only at certain frequencies, which were to correspond to the narrow spectral lines observed in gas-discharge tubes. The electrons can be knocked out of their positions quite easily, producing the positively charged "ions" that make up the "channel rays" in the mass-spectrograph experiments. X-rays correspond to very high overtones of the fundamental vibrations of the electrons. Alpha particles produced in radioactive transformations are part of the positive sphere, knocked out of it by some violent disruption of the atom.

Fig. 4. THE ATOM according to Thomson's model. The electrons are held inside a positively charged sphere by elastic forces. Those on the surface can be "knocked out" quite easily, leaving an ionized atom.

However, this model raised a number of objections. One of them was due to the fact that, as spectroscopists who measured the emission lines discovered, the frequencies of these lines are not simple multiples of the lowest frequency, as should be the case for periodic oscillations of a charge. Instead, the lines crowd closer together with increasing frequency, as if approaching a limit. Already in 1885, J. Balmer (1825–1898) managed to find a simple empirical formula connecting the frequencies of the lines in the visible part of the hydrogen spectrum:

ν = cR_H(1/2² − 1/n²), n = 3, 4, 5, ...,

where ν is the frequency, c is the speed of light (3×10⁸ m/s), n is an integer, and R_H is a certain constant factor. According to this formula, in this series of hydrogen spectral lines there should be no lines with a wavelength λ less than 364.56 nm (or with higher frequencies), corresponding to n = ∞. This turned out to be the case, and it became a serious objection to Thomson's model of the atom, although attempts were made to explain the discrepancy by differences in the elastic restoring forces acting on different electrons.
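The formula is easy to check numerically; the sketch below uses the modern value of the Rydberg constant for R_H and reproduces the visible (Balmer) lines and the 364.6 nm series limit:

```python
# Balmer series of hydrogen from nu = c * R_H * (1/2**2 - 1/n**2).
c = 2.998e8        # speed of light, m/s
R_H = 1.0968e7     # Rydberg constant for hydrogen, 1/m

for n in range(3, 8):
    nu = c * R_H * (1.0 / 2**2 - 1.0 / n**2)   # line frequency, Hz
    lam_nm = c / nu * 1e9                      # wavelength, nm
    print(f"n = {n}: lambda = {lam_nm:.1f} nm")

# Series limit (n -> infinity): lambda = 4 / R_H
print(f"series limit: {4 / R_H * 1e9:.1f} nm")   # ~364.6 nm
```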

Based on Thomson's model of the atom, it was also extremely difficult to explain the emission of X-rays or gamma radiation by atoms.

Difficulties for Thomson's atomic model were also created by the charge-to-mass ratio e/m of atoms that have lost their electrons ("channel rays"). The simplest atom is the hydrogen atom, with one electron and a relatively massive sphere carrying one positive charge. Much earlier, in 1815, W. Prout had suggested that all heavier atoms consist of hydrogen atoms, and it would be understandable if the mass of the atom increased in proportion to the number of electrons. However, measurements showed that the charge-to-mass ratio is not the same for different elements. For example, the mass of a neon atom is about 20 times the mass of a hydrogen atom, while its charge is only 10 units of positive charge (a neon atom has 10 electrons). The situation was as if the positive charge had a variable mass, or as if there were really 20 electrons, but 10 of them were inside the sphere.

http://www.krugosvet.ru/enc/nauka_i_tehnika/fizika/ATOMA_STROENIE.html

All bodies of living and inanimate nature, despite their diversity, consist of tiny particles, atoms. The first to suggest this was the ancient Greek philosopher Democritus. It was he who called the smallest indivisible particle that forms a substance an atom (atomos is Greek for "indivisible"). Only at the end of the 19th century were discoveries made that showed the complexity of the structure of the atom, that atoms break up into smaller elementary particles and thus are not "atoms" in Democritus' sense. Nevertheless, the term is still used in modern chemistry and physics, despite the discrepancy between its etymology and modern ideas about the structure of the atom.

First ideas about the atom

Democritus believed that if you divide, for example, an apple into two halves, then one of them into two more parts, and continue the division in this way until the result ceases to be an apple, then the smallest particle that still retains the properties of an apple is an atom of the apple (i.e. an indivisible part of the apple). He argued that atoms exist forever; that they are so small that their size cannot be measured; that all atoms of a given substance are alike, while atoms of different substances differ in appearance (water atoms, for example, are smooth and able to roll over one another, which gives the liquid its fluidity; iron atoms have teeth with which they hook onto each other, which gives iron the properties of a solid). Democritus' ideas were purely speculative.

The group of Greek philosophers who held that such tiny indivisible particles exist were called atomists. Atomism is the natural-philosophical theory according to which sensorily perceived (material) things consist of indivisible particles, atoms. (In modern physics the question of atomism is open: some theorists adhere to atomism, but by atoms they mean fundamental particles that are not further divisible.)

Fundamentals of the atomic theory of the structure of matter

In 1808, the English scientist John Dalton (1766–1844) revived atomism and proved the reality of the existence of atoms. He wrote: "Atoms are chemical elements that cannot be created anew, divided into smaller particles, or destroyed by any chemical transformations. Any chemical reaction simply changes the order in which the atoms are grouped." John Dalton introduced the concept of "atomic weight," was the first to calculate the atomic weights (masses) of a number of elements, and compiled the first table of their relative atomic weights, thereby laying the foundation of the atomic theory of the structure of matter.

Dalton was one of the most famous and respected scientists of his time, widely known for his pioneering work in various fields of knowledge. He was the first (1794) to investigate and describe a visual defect from which he himself suffered, color blindness, later called daltonism in his honor; he discovered the law of partial pressures (Dalton's law, 1801), the law of uniform expansion of gases on heating (1802), and the law of solubility of gases in liquids (the Henry-Dalton law). He established the law of multiple proportions (1803) and discovered the phenomenon of polymerism (using the example of ethylene and butylene).

However, the question of the internal structure of atoms did not even arise, since atoms were considered indivisible.

In 1897, the English physicist J. J. Thomson, studying cathode rays, came to the conclusion that the atoms of any substance contain negatively charged particles, which he called electrons. Thomson's great merit was the proof that all the particles forming cathode rays are identical to one another and are part of matter. In 1904 he proposed the first model of the atom, the "plum pudding" model.

According to Thomson, the positive charge of the atom occupies the entire volume of the atom and is distributed in this volume with constant density; embedded in the positively charged sphere are several electrons, so that the atom resembles a pudding in which the electrons play the role of raisins.

Nuclear model of the atom (planetary)

Rutherford bombarded atoms of heavy elements (gold, silver, copper, etc.) with alpha particles. α particles are fully ionized helium atoms. The electrons that make up the atoms, due to their low mass, cannot noticeably change the trajectory of the α particle. Scattering, that is, a change in the direction of motion of α-particles, can only be caused by the heavy, positively charged part of the atom.

It was found that most alpha particles pass through a thin layer of metal with little or no deflection. However, a small fraction of the particles is deflected through significant angles exceeding 30°. Very rarely (roughly one particle in ten thousand) an alpha particle was deflected through an angle close to 180°.

This result was completely unexpected even for Rutherford. It was in sharp contradiction with Thomson's model of the atom, according to which the positive charge is distributed throughout the entire volume of the atom. With such a distribution, the positive charge cannot create a strong electric field that can throw α particles back.
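A rough estimate shows why backscattering requires the positive charge to be packed into a tiny volume. The sketch below assumes an alpha-particle kinetic energy of about 5 MeV (a typical value, chosen here for illustration) and computes, from energy conservation, how close such a particle can approach a gold nucleus in a head-on collision:

```python
# Distance of closest approach of an alpha particle to a gold nucleus:
# all kinetic energy converts to Coulomb energy, E_k = 2*Z*e**2 / (4*pi*eps0*d).
import math

e = 1.602e-19            # elementary charge, C
eps0 = 8.854e-12         # vacuum permittivity, F/m
Z_gold = 79              # charge number of the gold nucleus
E_k = 5.0e6 * e          # assumed alpha-particle kinetic energy: 5 MeV in joules

d = 2 * Z_gold * e**2 / (4 * math.pi * eps0 * E_k)
print(f"closest approach d ~ {d:.1e} m")   # ~4.6e-14 m, vastly smaller than the atom (~1e-10 m)
```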

These considerations led Rutherford to the conclusion that the atom is almost empty, with all of its positive charge concentrated in a small volume. Rutherford called this part of the atom the atomic nucleus. This is how the nuclear (planetary) model of the atom arose:
1. At the center of the atom there is a positively charged nucleus, occupying an insignificant part of the space inside the atom.
2. All the positive charge and almost all the mass of an atom are concentrated in its nucleus (the mass of an electron is 1/1823 amu).
3. Electrons rotate around the nucleus. Their number is equal to the positive charge of the nucleus.

But on the basis of this model it is impossible to explain the very existence of the atom, its stability. After all, the motion of the electrons in their orbits is accelerated, and considerably so. According to the laws of electrodynamics, an accelerated electron must lose energy and approach the nucleus. As calculations based on Newtonian mechanics and Maxwellian electrodynamics show, the electron should fall onto the nucleus in a negligibly short time, and the atom should cease to exist. In reality nothing of the sort happens: atoms are stable and in the unexcited state can exist indefinitely without emitting electromagnetic waves at all. The conclusion, inconsistent with experience, that the atom must inevitably perish through loss of energy by radiation is the result of applying the laws of classical physics to the phenomena occurring inside the atom. It follows that the laws of classical physics are not applicable to phenomena on the atomic scale.
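A back-of-the-envelope classical estimate (a sketch using the standard radiated-power argument and assuming the electron starts at a typical atomic radius of about 0.5×10⁻¹⁰ m) gives the absurdly short spiral-in time alluded to above:

```python
# Classical estimate of the time for a radiating electron to spiral from an
# atomic-scale orbit onto the nucleus: t = r0**3 / (4 * r_e**2 * c),
# where r_e is the classical electron radius and r0 is the assumed starting radius.
r0 = 0.53e-10      # assumed initial orbit radius, m (about the Bohr radius)
r_e = 2.82e-15     # classical electron radius, m
c = 3.0e8          # speed of light, m/s

t_collapse = r0**3 / (4 * r_e**2 * c)
print(f"classical collapse time ~ {t_collapse:.1e} s")   # on the order of 1e-11 s
```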

Danish physicist Niels Bohr (1885 - 1962) believed that the behavior of microparticles cannot be described by the same laws as macroscopic bodies.
Bohr suggested that the quantities characterizing the microworld must be quantized, i.e. they can take only certain discrete values.
The laws of the microworld are quantum laws! These laws had not yet been established by science at the beginning of the 20th century. Bohr formulated them in the form of postulates complementing (and "saving") Rutherford's atom. His theory subsequently led to the creation of a coherent theory of the motion of microparticles, quantum mechanics.

Bohr's first postulate states: an atomic system can only be in special stationary, or quantum, states, each of which corresponds to a certain energy E. In a stationary state, the atom does not radiate.
According to Bohr's second postulate, light is emitted when the atom passes from a stationary state with higher energy to a stationary state with lower energy. The energy of the emitted photon is equal to the difference between the energies of the stationary states.

Quantum theory of atomic structure

Bohr's theory was replaced by quantum theory, which takes into account the wave properties of the electron and other elementary particles that form the atom.

The modern theory of atomic structure is based on the following basic principles:

1. The electron has a dual (particle-wave) nature. It can behave both as a particle and as a wave; like a particle, the electron has a definite mass and charge; at the same time, a moving electron exhibits wave properties, for example it is capable of diffraction. The electron wavelength λ and its speed v are related by the de Broglie relation:

λ = h/(mv), where m is the electron mass and h is Planck's constant.
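A short numerical sketch (with illustrative, assumed speeds) shows why the wave properties matter for an electron but are utterly negligible for a macroscopic body:

```python
# De Broglie wavelength lambda = h / (m * v) for an electron and for a ball.
h = 6.626e-34        # Planck's constant, J*s
m_e = 9.109e-31      # electron mass, kg

v_e = 1.0e6          # assumed electron speed, m/s
print(f"electron:    lambda = {h / (m_e * v_e):.2e} m")   # ~7e-10 m, comparable to atomic sizes

m_ball, v_ball = 0.1, 10.0   # a 100 g ball at 10 m/s (assumed values)
print(f"0.1 kg ball: lambda = {h / (m_ball * v_ball):.2e} m")  # ~7e-34 m, negligible
```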

2. For an electron it is impossible to simultaneously accurately measure its position and speed. The more accurately we measure the speed, the greater the uncertainty in the coordinate, and vice versa. The mathematical expression of the Heisenberg uncertainty principle is the relation

Δx·m·Δv ≥ ħ/2,
where Δx is the uncertainty in the coordinate and Δv is the uncertainty in the speed.
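As an illustration (a sketch assuming the electron's speed is known to within 10³ m/s, an arbitrary value), the relation gives the minimum uncertainty in its position:

```python
# Minimum coordinate uncertainty from dx * m * dv >= hbar / 2.
hbar = 1.055e-34     # reduced Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg

dv = 1.0e3           # assumed uncertainty in speed, m/s
dx_min = hbar / (2 * m_e * dv)
print(f"dx >= {dx_min:.1e} m")   # ~5.8e-8 m, hundreds of atomic diameters
```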

3. An electron in an atom does not move along a definite trajectory but can be found in any part of the space around the nucleus; however, the probability of finding it in different parts of this space is not the same. The region of space around the nucleus in which the probability of finding the electron is sufficiently high is called an orbital.

4. The nuclei of atoms consist of protons and neutrons (generally called nucleons). The number of protons in the nucleus is equal to the atomic number of the element, and the sum of the numbers of protons and neutrons corresponds to its mass number.

The last statement could be formulated only after E. Rutherford discovered the proton in 1920 and J. Chadwick discovered the neutron in 1932.

Different types of atoms have a common name - nuclides. It is enough to characterize nuclides by any two numbers from three fundamental parameters: A - mass number, Z - nuclear charge equal to the number of protons, and N - number of neutrons in the nucleus. These parameters are interconnected by the following relationships:

Z = A - N,
N = A - Z,
A= Z + N.

Nuclides with the same Z but different A and N are called isotopes.
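The bookkeeping behind these relations is trivial but worth seeing once; the sketch below applies A = Z + N to the three isotopes of hydrogen (protium, deuterium, tritium):

```python
# Nuclide bookkeeping: A = Z + N. Hydrogen isotopes share Z but differ in N and A.
isotopes = {"protium": (1, 0), "deuterium": (1, 1), "tritium": (1, 2)}  # name: (Z, N)

for name, (Z, N) in isotopes.items():
    A = Z + N
    print(f"{name}: Z = {Z}, N = {N}, A = {A}")
```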

The provisions formulated above form the essence of a new theory describing the movement of microparticles - quantum mechanics (mechanics applicable to the movement of ordinary bodies and described by Newton's laws began to be called classical mechanics). The greatest contribution to the development of this theory was made by the Frenchman L. de Broglie, the German W. Heisenberg, the Austrian E. Schrödinger, and the Englishman P. Dirac. Each of these scientists was subsequently awarded a Nobel Prize.

Quantum mechanics is a mathematically very complex theory, but that is not the main difficulty. The processes that quantum mechanics describes, the processes of the microworld, are inaccessible not only to the perception of our senses but also to the imagination. People have no way of visualizing them fully, since they are completely unlike the macroscopic phenomena that humanity has observed for millions of years. Human imagination does not create new images but only combines familiar ones, so it is almost impossible to describe the behavior of photons and other particles in our macroscopic language.

The discovery of the complex structure of the atom is the most important stage in the development of modern physics. In the process of creating a quantitative theory of atomic structure, which made it possible to explain atomic systems, new ideas were formed about the properties of microparticles, which are described by quantum mechanics.
The idea of atoms as the indivisible smallest particles of substances, as noted above, arose in antiquity (Democritus, Epicurus, Lucretius). In the Middle Ages the doctrine of atoms, being materialistic, did not receive recognition. By the beginning of the 18th century atomic theory was gaining increasing popularity. By that time the works of the French chemist A. Lavoisier (1743–1794), the great Russian scientist M. V. Lomonosov, and the English chemist and physicist J. Dalton (1766–1844) had proved the reality of the existence of atoms. However, at this time the question of the internal structure of atoms did not even arise, since atoms were considered indivisible.
A major role in the development of atomic theory was played by the outstanding Russian chemist D. I. Mendeleev, who in 1869 developed the periodic system of elements, in which the question of the unified nature of atoms was raised for the first time on a scientific basis. In the second half of the 19th century it was experimentally proven that the electron is one of the main constituents of any substance. These conclusions, as well as numerous experimental data, meant that by the beginning of the 20th century the question of the structure of the atom arose in earnest.
The existence of a natural connection between all chemical elements, clearly expressed in Mendeleev’s periodic system, suggests that the structure of all atoms is based on a common property: they are all closely related to each other.
However, until the end of the 19th century the metaphysical conviction prevailed in chemistry that the atom is the smallest particle of simple matter, the ultimate limit of the divisibility of matter. In all chemical transformations only molecules are destroyed and created anew, while atoms remain unchanged and cannot be split into smaller parts.
For a long time, various assumptions about the structure of the atom were not confirmed by any experimental data. Only at the end of the 19th century were discoveries made that showed the complexity of the structure of the atom and the possibility, under certain conditions, of transforming some atoms into others. On the basis of these discoveries the doctrine of the structure of the atom began to develop rapidly.
The first indirect evidence of the complex structure of atoms was obtained from the study of cathode rays generated during an electrical discharge in highly rarefied gases. The study of the properties of these rays led to the conclusion that they are a stream of tiny particles carrying a negative electrical charge and flying at a speed close to the speed of light. Using special techniques, it was possible to determine the mass of cathode particles and the magnitude of their charge, and to find out that they do not depend either on the nature of the gas remaining in the tube, or on the substance from which the electrodes are made, or on other experimental conditions. Moreover, cathode particles are known only in a charged state and cannot be stripped of their charges and converted into electrically neutral particles: electric charge is the essence of their nature. These particles, called electrons, were discovered in 1897 by the English physicist J. Thomson.
The study of the structure of the atom began in earnest in 1897–1898, after the nature of cathode rays as a stream of electrons had been finally established and the charge and mass of the electron had been determined. Thomson proposed the first atomic model, picturing the atom as a lump of matter with a positive electric charge in which just enough electrons are interspersed to make it electrically neutral. In this model it was assumed that, under external influences, the electrons could oscillate, i.e. move with acceleration. It seemed that this made it possible to answer questions about the emission of light by the atoms of matter and of gamma rays by the atoms of radioactive substances.
Thomson's model of the atom did not assume positively charged particles inside an atom. But how then can we explain the emission of positively charged alpha particles by radioactive substances? Thomson's atomic model did not answer some other questions.
In 1911, the English physicist E. Rutherford, while studying the movement of alpha particles in gases and other substances, discovered a positively charged part of the atom. Further more thorough studies showed that when a beam of parallel rays passes through layers of gas or a thin metal plate, no longer parallel rays emerge, but somewhat diverging ones: alpha particles are scattered, i.e., they deviate from the original path. The deflection angles are small, but there are always a small number of particles (about one in several thousand) that are deflected very strongly. Some particles are thrown back as if they had encountered an impenetrable barrier. These are not electrons - their mass is much less than the mass of alpha particles. Deflection can occur when colliding with positive particles whose mass is of the same order as the mass of alpha particles. Based on these considerations, Rutherford proposed the following diagram of the structure of the atom.
At the center of the atom there is a positively charged nucleus, around which electrons revolve in different orbits. The centrifugal force arising from their revolution is balanced by the attraction between the nucleus and the electrons, as a result of which they remain at certain distances from the nucleus. Since the mass of an electron is negligible, almost the entire mass of the atom is concentrated in its nucleus. The nucleus and the electrons, whose number is comparatively small, account for only an insignificant part of the total space occupied by the atomic system.
The structure of the atom proposed by Rutherford, usually called the planetary model of the atom, easily explains the phenomena of alpha-particle deflection. Indeed, the size of the nucleus and of the electrons is extremely small compared with the size of the whole atom, which is determined by the orbits of the electrons farthest from the nucleus, so most alpha particles fly through atoms without noticeable deflection. Only when an alpha particle comes very close to the nucleus does electrical repulsion cause it to deviate sharply from its original path. Thus, the study of the scattering of alpha particles laid the foundation of the nuclear theory of the atom.

4.2. Bohr's postulates

The planetary model of the atom made it possible to explain the results of experiments on the scattering of alpha particles by matter, but fundamental difficulties arose in accounting for the stability of atoms.
The first attempt to construct a qualitatively new - quantum - theory of the atom was made in 1913 by Niels Bohr. He set the goal of linking into a single whole the empirical laws of line spectra, the Rutherford nuclear model of the atom, and the quantum nature of the emission and absorption of light. Bohr based his theory on Rutherford's nuclear model. He suggested that electrons move around the nucleus in circular orbits. Circular motion, even at constant speed, has acceleration. This accelerated movement of charge is equivalent to alternating current, which creates an alternating electromagnetic field in space. Energy is consumed to create this field. The field energy can be created due to the energy of the Coulomb interaction of the electron with the nucleus. As a result, the electron must move in a spiral and fall onto the nucleus. However, experience shows that atoms are very stable formations. It follows from this that the results of classical electrodynamics, based on Maxwell’s equations, are not applicable to intra-atomic processes. It is necessary to find new patterns. Bohr based his theory of the atom on the following postulates.
Bohr's first postulate (postulate of stationary states): in an atom there are stationary (not changing with time) states in which it does not emit energy. Stationary states of an atom correspond to stationary orbits along which electrons move. The movement of electrons in stationary orbits is not accompanied by the emission of electromagnetic waves.
This postulate is in conflict with the classical theory. In the stationary state of an atom, an electron, moving in a circular orbit, must have discrete quantum values ​​of angular momentum.
Bohr's second postulate (frequency rule): when an electron passes from one stationary orbit to another, a single photon is emitted (or absorbed) with energy

hν = En − Em,

equal to the difference between the energies of the corresponding stationary states (En and Em are, respectively, the energies of the stationary states of the atom before and after the emission or absorption).
The transition of an electron from a stationary orbit number m to a stationary orbit number n corresponds to the transition of an atom from a state with energy Em into a state with energy En (Fig. 4.1).

Fig. 4.1. Illustration of Bohr's postulates

At En > Em a photon is emitted (the atom passes from a state with higher energy to a state with lower energy, i.e. the electron jumps from an orbit farther from the nucleus to one closer to it); at En < Em a photon is absorbed (the atom passes into a state with higher energy, i.e. the electron jumps to an orbit farther from the nucleus). The set of possible discrete frequencies

ν = (En − Em)/h

of the quantum transitions determines the line spectrum of the atom.
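A small numerical sketch of the frequency rule: using the well-known hydrogen level energies En = −13.6 eV / n² (quoted here as a standard result, not derived in this text), the transitions down to n = 2 reproduce the visible Balmer lines:

```python
# Bohr frequency rule h*nu = E_n - E_m applied to hydrogen,
# with level energies E_n = -13.6 eV / n**2 (standard textbook values).
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electron-volt

def E(n):
    return -13.6 * eV / n**2   # energy of the n-th stationary state, J

# Transitions n -> 2 give the visible (Balmer) lines.
for n in (3, 4, 5):
    nu = (E(n) - E(2)) / h        # emitted photon frequency, Hz
    print(f"{n} -> 2: lambda = {c / nu * 1e9:.0f} nm")   # 656, 486, 434 nm
```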
Bohr's theory brilliantly explained the experimentally observed line spectrum of hydrogen.
The successes of the theory of the hydrogen atom were achieved at the cost of abandoning fundamental principles of classical mechanics, which had remained unconditionally valid for more than 200 years. For this reason, direct experimental proof of the validity of Bohr's postulates, especially the first one, on the existence of stationary states, was of great importance. The second postulate can be considered a consequence of the law of conservation of energy and of the hypothesis of the existence of photons.
German physicists J. Franck and G. Hertz, studying the collisions of electrons with gas atoms by the retarding-potential method (1913), experimentally confirmed the existence of stationary states and the discreteness of the atomic energy values.
Despite the undoubted success of Bohr's concept as applied to the hydrogen atom, for which a quantitative theory of the spectrum could be constructed, it proved impossible to create a similar theory on the basis of Bohr's ideas for the helium atom, the next after hydrogen. For the helium atom and more complex atoms, Bohr's theory allowed only qualitative (though very important) conclusions to be drawn. The idea of definite orbits along which the electron moves in the Bohr atom turned out to be very conventional. In fact, the motion of electrons in an atom has little in common with the motion of planets in their orbits.
Currently, with the help of quantum mechanics, it is possible to answer many questions regarding the structure and properties of atoms of any elements.

4.3. Particle-wave properties of microparticles

The universality of the particle-wave concept

The French scientist Louis de Broglie (1892–1987), recognizing the symmetry existing in nature and developing ideas about the dual corpuscular-wave nature of light, put forward a hypothesis about the universality of wave-particle duality. He argued that not only photons but also electrons and any other particles of matter possess wave properties along with corpuscular ones.
According to de Broglie, each microobject is associated, on the one hand, with corpuscular characteristics (energy E and momentum p) and, on the other hand, with wave characteristics (frequency ν and wavelength λ). The formulas connecting the corpuscular and wave properties of particles are the same as for photons:

E = hν; p = h/λ.

The boldness of de Broglie's hypothesis lay precisely in the fact that these formulas were postulated not only for photons but also for other microparticles, in particular for those possessing a rest mass. Thus, with any particle possessing momentum there is associated a wave process whose wavelength is determined by de Broglie's formula:

λ = h/p.

This formula is valid for any particle with momentum p.
Soon, de Broglie's hypothesis was confirmed experimentally by American physicists K. Davisson (1881-1958) and L. Germer (1896-1971), who discovered that an electron beam scattered from the natural diffraction grating of a nickel crystal gives a distinct diffraction pattern.
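As a sketch of the orders of magnitude involved (taking the accelerating voltage of about 54 V usually quoted for the Davisson-Germer experiment as an assumed input), the electron's de Broglie wavelength comes out comparable to interatomic distances in a crystal, which is exactly what makes the diffraction observable:

```python
# De Broglie wavelength of an electron accelerated through a potential difference U:
# e*U = m*v**2/2  =>  p = sqrt(2*m*e*U),  lambda = h/p.
import math

h = 6.626e-34       # Planck's constant, J*s
m_e = 9.109e-31     # electron mass, kg
e = 1.602e-19       # elementary charge, C

U = 54.0            # accelerating voltage, V (value commonly quoted for Davisson-Germer)
p = math.sqrt(2 * m_e * e * U)
print(f"lambda = {h / p * 1e9:.3f} nm")   # ~0.17 nm, comparable to atomic spacings in a crystal
```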
De Broglie's experimentally confirmed hypothesis about the wave-particle duality of the properties of matter radically changed the idea of ​​the properties of micro-objects. All microobjects have both corpuscular and wave properties: they have the potential to manifest themselves, depending on external conditions, either in the form of a wave or in the form of a particle.

Principles of uncertainty and complementarity

Because of the dual corpuscular-wave nature of particles of matter, either wave or corpuscular concepts are used to describe the properties of microparticles. It is impossible to ascribe to them all the properties of particles and all the properties of waves. This makes it necessary to introduce certain restrictions on applying the concepts of classical mechanics to objects of the microworld.
In classical mechanics, every particle moves along a certain trajectory, so that at any moment of time its coordinate and momentum are precisely fixed. Microparticles, due to their wave properties, differ significantly from classical particles. One of the main differences is that it is impossible to talk about the movement of a microparticle along a certain trajectory and about the simultaneous exact values ​​of its coordinates and momentum. This follows from wave-particle dualism. Thus, the concept of “wavelength at a given point” has no physical meaning, and since momentum is expressed in terms of wavelength, a microparticle with a certain momentum has a completely uncertain coordinate. And vice versa, if a microparticle is in a state with an exact coordinate value, then its momentum is completely uncertain.
The German physicist W. Heisenberg, taking into account the wave properties of microparticles and the limitations in their behavior associated with the wave properties, came to the conclusion in 1927:
It is impossible to characterize an object of the microworld simultaneously, with any predetermined accuracy, by both coordinate and momentum. According to the Heisenberg uncertainty relation, a microparticle (microobject) cannot simultaneously have a definite coordinate x and a definite momentum p, and the uncertainties of these quantities satisfy the condition
Δx Δp ≥ h
(h is Planck’s constant), i.e. the product of the uncertainties of the coordinate and momentum cannot be less than Planck’s constant.
The impossibility of simultaneously determining the coordinate and the corresponding momentum component exactly is not due to imperfections of measurement methods or instruments. It is a consequence of the specific nature of microobjects, reflecting the peculiarities of their objective properties, their dual particle-wave nature. The uncertainty relation was obtained by simultaneously using the classical characteristics of a particle's motion (coordinate, momentum) and the presence of its wave properties. Since classical mechanics assumes that coordinate and momentum can be measured with any accuracy, the uncertainty relation is thus a quantum limitation on the applicability of classical mechanics to microobjects.
The uncertainty relationship, reflecting the specifics of the physics of microparticles, allows us to assess, for example, to what extent the concepts of classical mechanics can be applied to microparticles, in particular, with what degree of accuracy we can talk about the trajectories of microparticles. It is known that movement along a trajectory is characterized at any moment in time by certain values ​​of coordinates and speed.
For macroscopic bodies, their wave properties do not play any role: the coordinate and velocity of macroscopic bodies can be simultaneously measured quite accurately. This means that the laws of classical mechanics can be used to describe the motion of macrobodies with absolute certainty.
The uncertainty relation has repeatedly been the subject of philosophical discussion, leading some philosophers to an idealistic interpretation: the uncertainty relation, by not allowing the coordinates and momenta (velocities) of particles to be determined exactly at the same time, supposedly sets a limit to the knowability of the world, on the one hand, and implies the existence of microobjects outside space and time, on the other. In fact, the uncertainty relation does not set any limit to the knowledge of the microworld, but only indicates to what extent the concepts of classical mechanics are applicable to it.
To describe microobjects, N. Bohr formulated in 1927 a fundamental principle of quantum mechanics, the principle of complementarity, according to which obtaining experimental information about some physical quantities describing a microobject (an elementary particle, an atom, a molecule) is inevitably associated with the loss of information about certain other quantities complementary to the first.
Such mutually complementary quantities can be considered, for example, the coordinate of a particle and its speed (or momentum). In the general case, complementary to each other are physical quantities that correspond to operators that do not commute with each other, for example, the direction and magnitude of angular momentum, kinetic and potential energy.
From the physical point of view, the principle of complementarity is often explained (following Bohr) by the influence of the measuring device (a macroscopic object) on the state of the microobject. When one of the complementary quantities (for example, the coordinate of a particle) is measured accurately with an appropriate device, the other quantity (the momentum) undergoes a completely uncontrolled change as a result of the interaction of the particle with the device. Although this interpretation of the principle of complementarity is confirmed by the analysis of the simplest experiments, from a general point of view it encounters philosophical objections. From the standpoint of modern quantum theory, the role of the device in a measurement is to "prepare" a certain state of the system. States in which mutually complementary quantities would simultaneously have exactly defined values are fundamentally impossible, and if one of these quantities is exactly defined, then the values of the other are completely indeterminate. Thus, the principle of complementarity in fact reflects objective properties of quantum systems that are not connected with the observer.

4.4. Probabilistic nature of microprocesses

Probabilistic properties of microparticles

The experimental confirmation of de Broglie's idea of the universality of wave-particle duality, the limited applicability of classical mechanics to microobjects dictated by the principles of complementarity and uncertainty, and the contradiction between a number of experiments and the theories in use at the beginning of the 20th century led to a new stage in the development of physical concepts of the surrounding world, and of the microworld in particular: the creation of quantum mechanics, which describes the properties of microparticles with allowance for their wave characteristics. Its creation and development span the period from 1900 (Planck's formulation of the quantum hypothesis) to the 1920s and are associated primarily with the work of the Austrian physicist E. Schrödinger, the German physicist W. Heisenberg, and the English physicist P. Dirac.
At this time, new fundamental problems arose, in particular the problem associated with understanding the physical nature of de Broglie waves. To clarify this, let us consider the diffraction of microparticles. The diffraction pattern observed for microparticles is characterized by an unequal distribution of fluxes of these particles, scattered or reflected in different directions: a larger number of particles are observed in some directions than in others. From the point of view of wave theory, the presence of maxima in the diffraction pattern means that these directions correspond to the highest intensity of de Broglie waves. At the same time, the intensity of such waves turns out to be greater where there are a larger number of particles, i.e. their intensity at a given point in space determines the number of particles that hit this point. Consequently, the diffraction pattern for microparticles is a manifestation of a statistical (probabilistic) pattern, according to which particles fall into those places where the intensity of de Broglie waves is greatest.
The need for a probabilistic approach to the description of microparticles is an important distinctive feature of quantum theory. Can de Broglie waves be interpreted as probability waves, i.e. can we assume that the probability of detecting microparticles at different points in space changes according to the wave law? This interpretation of de Broglie waves is incorrect, if only because then the probability of detecting a particle at some points in space may be negative, which does not make sense.
To eliminate these difficulties, the German physicist M. Born (1882–1970) suggested in 1926 that it is not the probability itself that varies according to the wave law, but rather the probability amplitude, called the wave function. The description of the state of a microobject by means of the wave function has a statistical, probabilistic character: the square of the modulus of the wave function (the square of the modulus of the amplitude of the de Broglie waves) determines the probability of finding the particle at a given moment of time in a certain limited volume.
So, in quantum mechanics, the state of microparticles is described in a fundamentally new way - using the wave function, which is the main carrier of information about their corpuscular and wave properties.
The statistical interpretation of de Broglie waves and the Heisenberg uncertainty relation led to the conclusion that the equation of motion in quantum mechanics, which describes the motion of microparticles in various force fields, should be an equation from which the experimentally observed wave properties of particles would follow. The basic equation must be an equation for the wave function, because it is this function, or more precisely the square of its modulus, that determines the probability of finding a particle at a given moment in time in a given volume. In addition, the required equation must take into account the wave properties of particles, i.e., it must be a wave equation.
The basic equation of quantum mechanics was formulated in 1926 by E. Schrödinger. The Schrödinger equation, like many equations of physics, is not derived but postulated. Its correctness is confirmed by the agreement with experiment of the results obtained with its help, which in turn gives it the character of a law of nature.
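For reference, the standard non-relativistic form of the equation for a single particle of mass m in a potential U (the text refers to the equation but does not display it) is

iħ·∂Ψ/∂t = −(ħ²/2m)·ΔΨ + U·Ψ,

where Ψ is the wave function, ΔΨ is its Laplacian and ħ = h/2π is the reduced Planck constant.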

Principles of causality and correspondence

From the uncertainty relation an idealistic conclusion is sometimes drawn about the inapplicability of the principle of causality to phenomena occurring in the microworld. This is based on the following considerations. In classical mechanics, according to the principle of causality - the principle of classical determinism - from the known state of the system at some point in time (completely determined by the values of the coordinates and momenta of all particles of the system) and the forces applied to it, one can describe its state at any subsequent moment with complete accuracy. Therefore, classical physics is based on the following understanding of causality: the state of a mechanical system at the initial moment of time, together with the known law of interaction of particles, is the cause, and its state at a subsequent moment is the effect.
On the other hand, microobjects cannot have both a certain coordinate and a certain corresponding projection of momentum at the same time, therefore it is concluded that at the initial moment of time the state of the system is not precisely determined. If the state of the system is not precisely determined at the initial moment of time, then subsequent states cannot be predicted, i.e. the principle of causality is violated. However, no violation of the principle of causality in relation to micro-objects is observed, since in quantum mechanics the concept of the state of a micro-object takes on a completely different meaning than in classical mechanics. In quantum mechanics, the state of a microobject is completely determined by the wave function. Setting the wave function for a given moment in time determines its value at subsequent moments. Thus, the state of a system of microparticles, defined in quantum mechanics, unambiguously follows from the previous state, as required by the principle of causality.
An important role in the development of quantum mechanical concepts was played by the correspondence principle put forward by N. Bohr in 1923: any new, more general theory that is a development of the classical one does not reject it completely, but includes the classical theory, indicating the boundaries of its applicability, and in certain limiting cases the new theory passes into the old one.
Thus, the formulas of the kinematics and dynamics of relativistic mechanics turn into the formulas of Newtonian mechanics at speeds much lower than the speed of light. Similarly, although de Broglie's hypothesis attributes wave properties to all bodies, the wave properties of macroscopic bodies can be neglected, and classical Newtonian mechanics can be applied to them.
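A quick numerical comparison makes this concrete: computing the de Broglie wavelength λ = h/(m·v) for an electron and for an everyday object shows why the wave properties of macroscopic bodies never manifest themselves. The particular masses and speeds below are illustrative assumptions, not values from the text.

```python
import math

h = 6.626e-34          # Planck's constant, J*s
m_e = 9.109e-31        # electron mass, kg

# Electron moving at ~1e6 m/s (non-relativistic estimate).
print(h / (m_e * 1e6))     # ~7e-10 m, on the order of atomic sizes

# A 0.1 kg ball moving at 10 m/s.
print(h / (0.1 * 10))      # ~7e-34 m, utterly negligible
```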

4.5. Elementary particles

General information

Nuclear physics studies the structure and properties of atomic nuclei. It also studies the interconversions of atomic nuclei that occur as a result of both radioactive decay and various nuclear reactions. Closely related to nuclear physics are elementary particle physics, the physics and technology of charged-particle accelerators, and nuclear energy.
Nuclear physics research is of enormous scientific importance, allowing progress in understanding the structure of matter, and at the same time is extremely important in practical terms (in energy, medicine, etc.).
Elementary particles are the primary, indecomposable particles of which all matter is supposed to be composed. In modern physics this term is usually used not in its exact meaning but in a looser one: to name a large group of the tiniest particles of matter that are not atoms or atomic nuclei (the proton being the exception). Elementary particles include protons, neutrons, electrons, photons, pi-mesons, muons, heavy leptons, three types of neutrinos, strange particles (K-mesons, hyperons), various resonances, mesons with hidden charm, “charmed” particles, intermediate vector bosons, etc. - there are several hundred of them, mostly unstable. Their number continues to grow as our knowledge expands. Most of the listed particles do not satisfy the strict definition of elementarity, since they are composite systems.
The masses of most elementary particles are on the order of the proton mass, equal to 1.7·10⁻²⁴ g. The sizes of the proton, neutron, pi-meson and other hadrons are about 10⁻¹³ cm, while the sizes of the electron and muon have not been determined but are less than 10⁻¹⁶ cm. The microscopic masses and sizes of elementary particles determine the quantum specificity of their behavior. The most important quantum property of all elementary particles is the ability to be emitted and absorbed when interacting with other particles.

Truly elementary particles

Currently, from a theoretical point of view, the following truly elementary (at this stage of the development of science considered indecomposable) particles are known: quarks and leptons (these varieties refer to particles of matter), field quanta (photons, vector bosons, gluons, gravitinos and gravitons), as well as Higgs particles.
Each pair of leptons combines with a corresponding pair of quarks into a quartet called a generation. The properties of particles are repeated from generation to generation, only the masses differ: the second is heavier than the first, the third is heavier than the second. It is assumed that mainly first-generation particles are found in nature, and the rest can be created artificially at charged particle accelerators or through the interaction of cosmic rays in the atmosphere.
Truly elementary particles include quanta of fields created by particles of matter. Massive W bosons are carriers of weak interactions between quarks and leptons. Gluons are carriers of strong interactions between quarks. Like quarks themselves, gluons are not found in free form, but appear at intermediate stages of some reactions. The theory of quarks and gluons is called quantum chromodynamics.
The graviton is a particle with a supposed spin of 2. Its existence is predicted theoretically. However, it will be extremely difficult to detect, since it interacts very weakly with matter.
Finally, true elementary particles include Higgs particles, or H-mesons, and gravitinos. They have not been experimentally discovered, but their existence is assumed in many modern theoretical models.

Antimatter

Many particles have counterparts in the form of antiparticles, with the same mass, lifetime, spin, but differing in the signs of all charges: electric, baryon, lepton, etc. (electron-positron, proton-antiproton, etc.). The existence of antiparticles was first predicted in 1928 by the English theoretical physicist P. Dirac. From the Dirac equation for the relativistic motion of an electron followed the second solution for its twin - the positron, which has the same mass but a positive electric charge.
The antiparticle positron was first discovered in 1932 in cosmic rays by the American physicist C. Anderson (b. 1905), winner of the 1936 Nobel Prize.
A characteristic feature of the behavior of particles and antiparticles is their annihilation upon collision, i.e. transition into other particles with conservation of energy, momentum, electric charge, etc. A typical example is the mutual destruction of an electron and a positron with the release of energy during the birth of two photons. Annihilation can occur not only during electromagnetic interaction, but also during strong interaction. If at low energies annihilation occurs with the formation of lighter particles, then at high energies heavier particles than the original ones can be born if the total energy of the colliding particles exceeds the threshold for the birth of new ones, equal to the sum of their rest energy.
In strong and electromagnetic interactions there is complete symmetry between particles and antiparticles - all processes that occur with the former are possible and similar for the latter. Like protons and neutrons, their antiparticles can form antinuclei. In principle, one can imagine antiatoms and even large accumulations of antimatter.

Classification of conditionally elementary particles

Depending on their lifetime, particles are divided into stable (the electron, proton, photon and neutrino), quasi-stable (decaying due to the electromagnetic and weak interactions, with lifetimes greater than 10⁻²⁰ s) and resonances (particles decaying due to the strong interaction, with characteristic lifetimes of 10⁻²² - 10⁻²⁴ s).
In accordance with the four types of fundamental interactions, four types of elementary particles are distinguished: hadrons, which participate in all interactions; leptons, which do not participate in the strong interaction (and neutrinos do not participate in the electromagnetic one either); the photon, a carrier only of the electromagnetic interaction; and the hypothetical graviton, the carrier of the gravitational interaction.
Hadrons is the general name for particles that participate most actively in strong interactions. The name comes from the Greek word for “strong, large”. All hadrons are divided into two large groups - baryons and mesons.
Baryons are hadrons with half-integer spin. The most famous of them are the proton and the neutron. One of the properties of baryons that distinguishes them from other particles is the presence of a conserved baryon charge.
Mesons are hadrons with integer spin. Their baryon charge is zero. Most of them are extremely unstable and decay within a time of about 10⁻²³ s. Such short-lived particles cannot leave traces in detectors, and their birth is usually detected by indirect signs. For example, one studies the annihilation of electrons and positrons with subsequent production of hadrons. By varying the collision energy, one finds that at a certain value the hadron yield increases sharply. This fact can be explained by the birth of a particle in an intermediate state, which then instantly decays into other particles that are registered. Such short-lived particles are called resonances. Most baryons and mesons are resonances.
Hadrons are not truly elementary particles: they have finite sizes and a complex structure. A baryon consists of three quarks, mesons are built from a quark and an antiquark, and the quarks are held inside hadrons by the gluon field. In principle, the theory allows for the existence of other hadrons, constructed from a larger number of quarks or from the gluon field alone.
The quark model was originally proposed in order to systematize the overly numerous family of hadrons. This model included quarks of three types, or flavors (later it turned out that there were more of them). With the help of quarks it was possible to divide the hadrons into groups called multiplets; the particles of one multiplet have slightly different masses.

4.6. Structure of the atomic nucleus

Nucleon level

About 20 years after Rutherford “saw” the nucleus in the depths of the atom, the neutron was discovered - a particle identical in almost all its properties to the nucleus of the hydrogen atom, the proton, but carrying no electric charge. The neutron turned out to be extremely convenient for probing the interior of nuclei: since it is electrically neutral, the electric field of the nucleus does not repel it, so even slow neutrons can approach the nucleus to distances at which nuclear forces begin to manifest themselves. After the discovery of the neutron, the physics of the microworld moved forward by leaps and bounds.
Soon after the discovery of the neutron, two theoretical physicists - the German Werner Heisenberg and the Soviet Dmitry Ivanenko - hypothesized that the atomic nucleus consists of neutrons and protons. The modern understanding of the structure of the nucleus is based on it.
Protons and neutrons are referred to jointly as nucleons. Protons are elementary particles that are the nuclei of atoms of the lightest chemical element, hydrogen. The number of protons in the nucleus is equal to the atomic number of the element in the periodic table and is designated Z (the number of neutrons is N). A proton carries a positive electric charge, equal in absolute value to the elementary electric charge, and is approximately 1836 times heavier than the electron. A proton is made up of two u-quarks with charge Q = +2/3 and one d-quark with Q = -1/3, bound by the gluon field. It has finite dimensions of the order of 10⁻¹⁵ m, although it cannot be pictured as a solid ball; it rather resembles a cloud with a blurred boundary, consisting of continually created and annihilated virtual particles.
The electric charge of the neutron is zero, and its mass is approximately 940 MeV. The neutron consists of one u-quark and two d-quarks. This particle is stable only within stable atomic nuclei; a free neutron decays into a proton, an electron and an electron antineutrino. The half-life of the neutron (the time it takes for half the original number of neutrons to decay) is approximately 12 minutes. In matter, free neutrons exist for an even shorter time because of their strong absorption by nuclei. Like the proton, the neutron participates in all types of interactions, including the electromagnetic one: although neutral as a whole, it contains electric currents owing to its complex internal structure.
In the nucleus, nucleons are bound by forces of a special kind - nuclear forces. One of their characteristic features is their short range: at distances of the order of 10⁻¹⁵ m or less they exceed any other forces, as a result of which the nucleons do not fly apart under the electrostatic repulsion of the like-charged protons. At larger distances, nuclear forces fall off to zero very quickly.
The mechanism of action of nuclear forces is based on the same principle as electromagnetic forces - on the exchange of interacting objects with virtual particles.
Virtual particles, in quantum theory, are particles that have the same quantum numbers (spin, electric and baryon charges, etc.) as the corresponding real particles, but for which the usual relationship between energy, momentum and mass does not hold.

Quarks

The quark hypothesis was proposed in 1964 by the American theoretical physicist M. Gell-Mann (b. 1929). A quark is a particle with spin 1/2 and a fractional electric charge, a constituent element of hadrons. Gell-Mann borrowed the name from James Joyce's novel Finnegans Wake; it denotes something whimsical and strange.
In addition to spin, quarks have two more internal degrees of freedom - “flavor” and “color” (a degree of freedom is an independent possible change in the state of a physical system due to variations in its parameters). Each quark can be in one of three color states, conventionally called red, blue and yellow (purely for convenience - this has nothing to do with optical properties). In observed hadrons, quarks are combined in such a way that the resulting states carry no color - they are “colorless”. Five flavors are known, and a sixth is suspected. The properties of quarks of different flavors are different.
Ordinary matter consists of the light u- and d-quarks that make up the nucleons of nuclei. Heavier quarks are created artificially or observed in cosmic rays. Here the words “created” and “observed” cannot be taken literally - not a single quark has been detected in free form; they can only be observed inside hadrons. When one tries to knock a quark out of a hadron, the following happens: the escaping quark creates quark-antiquark pairs out of the vacuum along its path, arranged in descending order of speed. One of the slow quarks takes the place of the original one, and that one, together with the rest of the created quarks and antiquarks, forms hadrons.

4.7. Nuclear processes

Mass defect and binding energy

The mass of a nucleus is determined by the masses of its constituent neutrons and protons. Since any nucleus consists of Z protons and N = A - Z neutrons, where A is the mass number (the number of nucleons in the nucleus), it would seem at first glance that the mass of the nucleus should simply be equal to the sum of the masses of the protons and neutrons. However, as measurements show, the real mass is always less than this sum. Their difference is called the mass defect Δm.
Energy is one of the most important characteristics of any physical process. In nuclear physics, its role is especially great, since the inviolability of the law of conservation of energy allows one to make fairly accurate calculations even in cases where many details of the phenomena remain unknown.
It is possible to break a nucleus into individual nucleons only by introducing into it from the outside in some way an energy no less than that released during its formation. This is the total binding energy of the nucleus Eb. The origin of the mass defect Δm is directly related to it. According to the formula
Eb = Δm·c²
a decrease in the energy of the system by some amount during the formation of a nucleus must inevitably lead to a decrease in the total mass. Such a change in mass occurs in any process associated with the transfer of energy, but in the phenomena familiar to us the changes in mass are relatively small and unnoticeable. In nuclear phenomena, owing to the great magnitude of the nuclear forces, the change in mass is very significant: for a neon nucleus, for example, the mass defect is almost 1% of the mass of the nucleus.
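A minimal numerical sketch of this relation in Python, using the helium-4 nucleus; the nuclide choice and the mass values are standard reference data taken for illustration, not numbers from the text.

```python
# Mass defect and binding energy, Eb = delta_m * c^2, for helium-4.
m_p = 1.007276          # proton mass, atomic mass units (u)
m_n = 1.008665          # neutron mass, u
m_He4 = 4.001506        # helium-4 nuclear mass, u
u_to_MeV = 931.494      # energy equivalent of 1 u, MeV

Z, N = 2, 2
delta_m = Z * m_p + N * m_n - m_He4    # mass defect, u
E_b = delta_m * u_to_MeV               # binding energy, MeV

print(delta_m)          # ~0.030 u, i.e. about 0.75% of the nuclear mass
print(E_b)              # ~28.3 MeV
print(E_b / (Z + N))    # ~7.1 MeV per nucleon
```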

Average binding energy of one nucleon in a nucleus

If we divide the amount of energy “lost” during the formation of a nucleus by the total number of nucleons, we obtain the average binding energy per nucleon in the nucleus, or the specific binding energy equal to Eb/A. The specific binding energy depends on the mass number. For most nuclei, the average specific binding energy is approximately the same (with the exception of light and heavy nuclei).
Each nucleon has a limited supply of interaction possibilities, and if this supply has already been used up in connection with two or three neighboring nucleons, then the remaining bonds turn out to be weakened even at very close distances.
The strongest are nuclei with average mass numbers. In light nuclei, all or almost all nucleons lie on the surface of the nucleus, and therefore do not fully use their interaction capabilities, which somewhat reduces the specific binding energy. As the mass number increases, the proportion of nucleons lying inside the nucleus that fully utilize their capabilities increases, therefore the value of the specific binding energy gradually increases. With a further increase in the mass number, the mutual repulsion of the electric charges of protons begins to have an increasingly stronger effect, which tends to break the nucleus and therefore reduces the specific binding energy. This leads to the fact that all heavy nuclei are unstable.
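One standard way to put this volume-surface-Coulomb reasoning into numbers is the semi-empirical (Weizsäcker) mass formula. It is not discussed in the text; the sketch below uses typical textbook fit coefficients (in MeV), which differ slightly from source to source, and it reproduces the qualitative trend just described.

```python
import math

# Semi-empirical mass formula: volume, surface, Coulomb, asymmetry and
# pairing terms (coefficients are illustrative textbook values, in MeV).
aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, A):
    N = A - Z
    E = aV * A - aS * A ** (2 / 3) - aC * Z * (Z - 1) / A ** (1 / 3) \
        - aA * (A - 2 * Z) ** 2 / A
    if Z % 2 == 0 and N % 2 == 0:        # pairing term: even-even nuclei
        E += aP / math.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd nuclei
        E -= aP / math.sqrt(A)
    return E

for name, Z, A in [("He-4", 2, 4), ("Fe-56", 26, 56), ("U-238", 92, 238)]:
    print(name, round(binding_energy(Z, A) / A, 2), "MeV per nucleon")
# Light and heavy nuclei come out less tightly bound than medium ones,
# reproducing the trend described in the text.
```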

Radioactivity

On March 1, 1896, the French physicist A.A. Becquerel (1852-1908) discovered the blackening of a photographic plate under the influence of invisible, highly penetrating rays emitted by a uranium salt. He soon found that uranium itself has the ability to emit radiation. Radioactivity (as the discovered phenomenon came to be called) at first appeared to be the privilege of the heaviest elements of the periodic table. The phenomenon is defined as the spontaneous transformation of an unstable isotope of one element into an isotope of another, accompanied by the emission of electrons, protons, neutrons or helium nuclei (alpha particles). It was found that radioactivity is a very common phenomenon.
Atomic nuclei that differ in the number of neutrons and protons have a common name - nuclides. Of the 1,500 known nuclides, only 265 are stable. Among the elements contained in the earth's crust, all those with atomic numbers greater than 83, i.e., those located in the periodic table after bismuth, are radioactive: they have no stable isotopes at all (isotopes are varieties of atoms of the same chemical element, differing in the number of neutrons in the nucleus). Natural radioactivity has also been found in individual isotopes of other elements. Natural radioactive isotopes decay with the emission of alpha or beta particles (very rarely both).
In 1940, the Soviet scientists G.N. Flerov and K.A. Petrzhak discovered a new type of radioactive transformation - spontaneous nuclear fission. The emission of gamma rays does not lead to the transformation of elements and is therefore not considered a type of radioactive transformation. Thus, the number of modes of radioactive decay of natural isotopes is very limited.
However, other modes are now known; they were discovered or predicted after 1934, when the French physicists Irène (1897-1956) and Frédéric (1900-1958) Joliot-Curie observed the phenomenon of artificial radioactivity. As a result of nuclear reactions (for example, when various elements are irradiated with alpha particles or neutrons), radioactive isotopes that do not exist in nature are formed. I. and F. Joliot-Curie carried out a nuclear reaction whose product was a radioactive isotope of phosphorus with mass number 30. This type of transformation is called beta-plus decay (emission of a positron), beta-minus decay being the emission of an electron. During beta-plus decay, the nuclear charge decreases by 1. The same change occurs during so-called orbital capture: some nuclei can capture an electron from the nearby shells. This is also a type of radioactive transformation. It is customary to combine beta-plus decay, beta-minus decay and electron capture under the general name beta decay. Theoretical physicists have predicted the possibility of a double beta transformation, in which two electrons or two positrons are emitted simultaneously; in practice such a transformation has not yet been observed. Proton and two-proton radioactivity have also been observed. All these types of transformation affect only artificial isotopes that are not found in nature.
Radioactivity is characterized not only by the type of particles emitted, but also by their energy, which can be millions of times greater than the energy of chemical processes. It is absolutely impossible to predict in advance the moment of decay of each individual nucleus: the lifetime of a nucleus is a random variable. The rate of radioactive decay cannot be influenced by external factors - pressure, temperature, etc. The spontaneous nature of decay is one of its most important features.
Although all nuclei live for different times from the moment of formation to the moment of decay, for each radioactive substance there is a quite definite average lifetime of the nuclei. The decay rate obeys the law of radioactive decay, expressed by the formula

Nt = N0·e^(-λt),

where λ is the radioactive decay constant, Nt is the number of undecayed nuclei at time t, and N0 is the initial number of undecayed nuclei (at time t = 0).
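A minimal Python sketch of this law; the half-life used below is an arbitrary illustrative value, not one from the text, and the script also uses the standard connection T½ = ln 2 / λ between half-life and decay constant.

```python
import math

# Radioactive decay law: N_t = N_0 * exp(-lambda * t).
T_half = 100.0                         # assumed half-life, s (illustrative)
lam = math.log(2) / T_half             # decay constant lambda, 1/s

N0 = 1_000_000                         # initial number of undecayed nuclei
for t in (0, 100, 200, 300):
    print(t, round(N0 * math.exp(-lam * t)))
# The count halves every 100 s: 1000000, 500000, 250000, 125000.
```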

Chain reaction of fission of uranium nuclei

This reaction was discovered in 1939: it turned out that when one neutron hits a nucleus, it splits into two or three parts. When one nucleus fissions, about 200 MeV of energy is released. The kinetic energy of movement of the fragments takes about 165 MeV, the rest is carried away by gamma radiation (part of electromagnetic radiation with a very short wavelength) - a stream of photons. It can be calculated that with complete fission of 1 kg of uranium, 80,000 billion J will be released. This is several million times more than when burning 1 kg of coal or oil. It would be surprising not to use such energy.
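The 80,000 billion J figure is easy to check from the ~200 MeV released per fission; the short Python sketch below does the arithmetic (Avogadro's number and the electron-volt are standard constants, not values from the text).

```python
# Energy released by complete fission of 1 kg of uranium-235.
N_A = 6.022e23                 # Avogadro's number, 1/mol
eV = 1.602e-19                 # joules per electron-volt

nuclei_per_kg = 1000.0 / 235.0 * N_A        # U-235 nuclei in 1 kg
energy_per_fission = 200e6 * eV             # ~200 MeV in joules

print(nuclei_per_kg * energy_per_fission)   # ~8e13 J, i.e. ~80,000 billion J
```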
In 1939, it was discovered that during the fission of uranium nuclei, in addition to fragments, 2-3 free neutrons are also released. Under favorable conditions, they can enter other uranium nuclei and cause their fission (Fig. 4.2).

Fig. 4.2. Nuclear chain reaction

The practical implementation of a chain reaction is complicated by certain circumstances. In particular, secondary neutrons are capable of causing the fission only of nuclei of the uranium isotope with mass number 235; their energy is insufficient to destroy the nuclei of the uranium-238 isotope. Natural uranium contains approximately 0.7% uranium-235. A necessary condition for a chain reaction to occur is the presence of a sufficiently large amount of uranium-235, since in a small sample the majority of neutrons fly through without hitting any nucleus. The minimum (critical) mass for pure uranium-235 is several tens of kilograms.

Due to the fact that nuclear forces of attraction act between atomic nuclei at short distances, when two nuclei come closer together, their fusion is possible, i.e., the synthesis of a heavier nucleus. In order for the nuclei to overcome electrostatic repulsion and get closer, they must have sufficient kinetic energy. Accordingly, the easiest way is to synthesize light nuclei with a low electrical charge.
In nature, fusion reactions occur in very hot matter, for example in the interior of stars, where at a temperature of about 14 million degrees (the center of the Sun), the energy of thermal motion of some particles is sufficient to overcome repulsion. Nuclear fusion occurring in heated matter is called thermonuclear.
The peculiarity of thermonuclear reactions as an energy source is the very large energy release per unit mass of the reacting substances - 10 million times greater than in chemical reactions. The fusion of 1 g of hydrogen isotopes is equivalent to the combustion of 10 tons of gasoline. In principle, the energy of thermonuclear fusion can already be obtained on Earth: matter can be heated to stellar temperatures using the energy of an atomic explosion. This is how the hydrogen bomb works, where the explosion of a nuclear fuse leads to instantaneous heating of a mixture of deuterium and tritium and a subsequent thermonuclear explosion. However, this is an uncontrolled process.
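A rough order-of-magnitude check of the gasoline comparison, using the D + T → He-4 + n reaction, which releases about 17.6 MeV; the gasoline heating value of ~46 MJ/kg is an assumed reference figure, not a number from the text.

```python
# Energy from fusing 1 g of a 1:1 deuterium-tritium mixture vs 10 t of gasoline.
N_A = 6.022e23                 # Avogadro's number, 1/mol
eV = 1.602e-19                 # joules per electron-volt

pairs_per_gram = 1.0 / (2.0 + 3.0) * N_A        # D-T pairs in 1 g of fuel
E_fusion = pairs_per_gram * 17.6e6 * eV         # J released by 1 g of fuel
E_gasoline_10t = 10_000 * 46e6                  # J from burning 10 t of gasoline

print(E_fusion)        # ~3.4e11 J
print(E_gasoline_10t)  # ~4.6e11 J -- the same order of magnitude
```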
Several conditions are required for controlled nuclear fusion to occur. First, it is necessary to heat the thermonuclear fuel to a temperature where fusion reactions can occur with a noticeable probability. Secondly, it is necessary that during fusion more energy is released than is expended on heating the substance (or, even better, that the fast particles being born themselves maintain the required temperature). This is possible provided there is good insulation.
The easiest way to carry out synthesis is between the heavy isotopes of hydrogen - deuterium and tritium (Fig. 4.3). Deuterium is found on Earth in huge quantities in seawater (1 atom per 6000 hydrogen atoms); Tritium can be produced artificially by irradiating lithium with neutrons.

Fig. 4.3.

The most favorable temperature for a thermonuclear reaction is about 100 million degrees. As for the energy confinement time, i.e., the quality of the thermal insulation, the condition in this case is as follows: a plasma with a density of 10¹⁴ ions per cm³ should not cool appreciably in less than about 1 second.
Plasma is kept from hitting the heat-insulating walls using magnetic fields that direct the flow of particles in a spiral closed in a ring. Due to the fact that plasma consists of ions and electrons, the magnetic field has a direct effect on it.
For heating, you can use the current flowing through the plasma “cord”. There are other heating methods - using high-frequency electromagnetic waves, beams of fast particles, light beams generated by lasers. The greater the power of the heating device, the faster the plasma can be heated to the required temperature. Recent developments make it possible to do this in such a short time that the substance has time to enter into a synthesis reaction before scattering due to thermal motion. In such conditions, additional thermal insulation is unnecessary. The only thing that keeps particles from flying apart is their own inertia. This direction - inertial thermonuclear fusion - has been developing rapidly recently.

4.8. Prospects for the development of microworld physics

Development of the theory

The latest achievements in elementary particle physics have clearly identified from the total number a group of particles - possible candidates for the role of truly elementary ones. Many issues, however, require further research. It is not known what the total number of leptons, quarks and various vector particles is and whether there are physical principles that determine it. The reasons for the division of particles with spin 1/2 into leptons and quarks and the origin of their internal quantum numbers are not entirely clear.
Modern theories assume that particles are point objects and that four-dimensional spacetime remains continuous and uncurved down to the smallest distances. In reality, these assumptions are apparently incorrect, since particles obviously must be material objects of finite extent, and space-time on scales of 10⁻³³ cm changes its properties under the influence of gravity and forms something like quanta. Taking these circumstances into account opens the way to the creation of a unified theory of interaction.

Modern accelerators

The controlled beams of fast particles obtained in an accelerator have turned out to be the only suitable tool for operating inside atoms and atomic nuclei and for studying the nature and structure of nuclear particles. But this requires energies of tens, hundreds and even thousands of GeV (gigaelectron-volts; 1 GeV = 10⁹ eV), so it is no accident that the field of fundamental research into the structure of matter is called high-energy physics. If accelerators designed for high energies were made linear, on the principle of a television tube, then, as calculations show, their dimensions would reach many hundreds of kilometers. Therefore the accelerator is, as it were, rolled into a ring, forcing the particles to pass repeatedly through the regions where the accelerating electric field operates. The higher the energy of the particles, the harder it is to bend them onto a circular path, and the stronger the bending magnetic fields needed to do so. In addition, the like-charged particles in the beam repel each other and are scattered by the residual gas in the accelerator's vacuum chamber. Therefore, along with bending magnets, focusing magnets are also needed to compress the particles into a thin beam. The maximum energy of modern accelerators is limited by a reasonable limit on the size and cost of the magnetic system, which is the most bulky and expensive part.
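The connection between beam energy, ring size and magnetic field is a one-line estimate: for an ultrarelativistic proton the bending field is B = p/(e·r), with p ≈ E/c. The sketch below uses 70 GeV and a 240 m radius, illustrative numbers close to the Protvino synchrotron parameters mentioned further on (and it ignores straight sections, so it gives only the average field along the orbit).

```python
# Average bending field needed to keep an ultrarelativistic proton on a ring.
e = 1.602e-19          # elementary charge, C
c = 3.0e8              # speed of light, m/s

E = 70e9 * e           # proton energy, J (70 GeV)
r = 240.0              # bending radius, m

p = E / c              # ultrarelativistic momentum, kg*m/s
print(p / (e * r))     # ~1 T; higher energy or a smaller ring needs stronger fields
```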
A beam of particles generated by the accelerator (usually electrons or approximately 2000 times heavier protons) is directed to a target specially selected based on the objectives of the experiment, upon collision with which a variety of secondary particles are born. With the help of fairly complex systems - detectors - these particles are recorded, their mass, electric charge, speed and many other characteristics are determined. Then, through complex mathematical processing of this information on a computer, the trajectory of motion and the entire picture of the interaction of the accelerated particle with the target matter are restored. And finally, by comparing the measurement results with the preliminary calculation, conclusions are drawn about the parameters of the theoretical interaction model. It is here that new knowledge about the properties of intranuclear particles is obtained. It may very well be that it is the knowledge that high-energy physics will give us that will make it possible to create a new energy industry - the energy industry of the 21st century, which will put an end to the total destruction of the resources of our planet.
Instead of a stationary target, a counter-accelerated particle beam can also be used. With an appropriate arrangement of accelerators this makes it possible to use the energy of their beams much more efficiently. These most modern colliding-beam accelerators are called colliders. There are only a few colliders in the world so far; they are located in the USA, Japan and Germany, as well as at the European Organization for Nuclear Research (CERN), based in Switzerland. Our country has also been among the leaders in the development and construction of accelerators, and accordingly in high-energy physics, for many years. Thus, the synchrophasotron in Dubna, built in 1956 (energy 10 GeV, orbit length about 200 m, ring electromagnet weight 40 thousand tons), was in its time the world record holder in the energy of accelerated protons and in size, as, later, was the synchrotron built in 1967 in the town of Protvino near Serpukhov (energy 70 GeV, orbit length 1.5 km, electromagnet weight 22 thousand tons). A number of fundamental results were obtained and several discoveries made with these machines: for example, antimatter nuclei were detected for the first time, and the so-called “Serpukhov effect” was discovered - a growth of the total cross sections of hadronic interactions (quantities that determine the course of the reaction of two colliding particles) - and much more.
The U-70 accelerator of the Institute of High Energy Physics in Protvino remains the largest in Russia to this day. Physicists from many laboratories of our country and the CIS countries conduct research at it, and a number of joint physics programs are carried out with the West. During its reconstruction, the world's first linear accelerator with high-frequency focusing, without magnets, was installed for the initial stage of acceleration, and an “intermediate” synchrotron with an energy of 1.5 GeV and a diameter of 30 m was put into operation. As a result of this modernization, the intensity of the proton beam (roughly speaking, the number of particles in the beam) was increased by an order of magnitude, which made it possible to keep physicists interested in the domestic research program even after more powerful accelerators appeared abroad. At the same time, a project was developed for a new accelerator, the UNK (accelerator-storage complex), which could for a long time have been the most powerful in the world and attracted the best forces of the world physics community. As early as 1983, after the adoption of the corresponding government decision, work began in Protvino on the construction of the UNK, which was ultimately supposed to provide an energy of 3000 GeV - three times the energy of the most powerful accelerator in the world, at the E. Fermi National Accelerator Laboratory (FNAL) in the USA.
For UNK, a ring tunnel 21 km long and about 5 m in diameter was dug (in size it is comparable to the ring line of the Moscow metro). They planned to install superconducting magnets in it, which have already been tested. However, with the collapse of the USSR, economic ties were interrupted, and the plant in Ust-Kamenogorsk, where the superconductor was produced, turned out to be foreign. It was decided to launch the first stage of the new installation using conventional magnets, which would provide an energy of 600 GeV (it was called U-600). To do this, it is necessary to install more than two thousand magnets weighing about 10 tons each along the ring, which is estimated at approximately $150 million and is only a small part of the funds already invested. In 1997, Minatom leaders proposed speeding up the work and completing it in three years.
In ten years, the construction of the world's largest charged particle accelerator will be completed - the Large Hadron Collider (LHC) in Geneva, in a 27-kilometer underground tunnel on the border between Switzerland and France. Physicists hope that with energies of colliding particles unimaginable today (about 10 trillion electron volts), it will be possible to finally obtain the information that is still missing about the deep mechanisms of their interaction inside the nucleus and build a consistent picture of the universe. In addition, new knowledge will certainly provide new ways to satisfy the “energy appetites” of humanity without the total destruction of the earth’s resources - a necessary and noble task.
Russia, in the understanding of European scientists, has a unique scientific and technical culture, the importance of which in the general global process of knowledge can hardly be overestimated. A decrease in its level, and even more so its loss, would be a heavy blow to the progress of mankind, and therefore cooperation with Russian scientists must continue and strengthen for the benefit of both parties.
Russian physics will be quite adequately represented in the LHC program. We are talking not only about the construction of the accelerator itself on superconducting magnets, but also about the creation of grandiose experimental equipment. The accelerator itself is only the “locomotive” of scientific research; the entire “payload” is delivered by particle and radiation detectors. At a large accelerator, the size of the detectors is astonishing. One of them - the largest, called ATLAS by its designers - is an underground cylinder 26 m long and 20 m in diameter, with a total weight of 7 thousand tons, packed with the most complex equipment.
When creating the ATLAS detector and conducting experiments on it, an international team of one and a half thousand people from three dozen countries was formed. And this is not just about the scale of the installation. The new physics differs from the old one more than a factory assembly line from a handicraft workshop. Suffice it to say that ATLAS will begin to produce a data stream equivalent to the information circulating today in all European computer networks.
According to one of the Greek myths, Atlas was a titan who had to hold the sky on his shoulders as punishment for disobedience to the gods of Olympus. Continuing the parallel, we can say that the Geneva ATLAS is called upon to strengthen and support with its powerful efforts the entire edifice of modern physics. But this is not a punishment, but the fruit of the joint creativity of many scientists from many countries and the basis for the prosperous existence of all those people who are far from science, but enjoy its fruits.

Structural neutronography

In an effort to penetrate deep into matter and study its structure, researchers created increasingly effective tools and methods. The optical microscope has been replaced by an electron microscope with an incomparably higher resolution. X-ray diffraction analysis made it possible to “see” the shape of the atomic lattice of a crystal and even monitor its changes under the influence of external conditions, for example, when changing temperature and pressure. Relatively recently, new methods for studying matter have been created, developed and improved, based on the scattering of neutrons in it.
The neutron, like any other particle, also has the properties of a wave, so a neutron flux can be regarded as very short-wave radiation (characteristic wavelength about 0.03 nm, or 0.3 angstrom). Passing through matter, neutrons undergo diffraction - scattering on individual atoms, in which additional deflected fluxes arise from the initial beam of particles. Their direction and intensity depend on the structure of the scattering object. In a crystal, for example, one can distinguish a set of regular atomic layers - crystallographic planes - upon reflection from which the neutron fluxes change in intensity. Intensity maxima occur in the directions where an integer number of wavelengths fits into the path difference of two reflected beams. This condition for wave scattering by a crystal was established in 1913 for X-rays by the English physicist W.L. Bragg (1890-1971) and the Russian crystallographer G.V. Wulff (1863-1925); it is also valid for any other waves. By measuring neutron scattering angles, it is possible to reconstruct the atomic structure of a substance.
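A small Python sketch of the two relations at work here: the de Broglie wavelength λ = h/(m·v) of the neutron and the Bragg-Wulff condition n·λ = 2·d·sin θ. The interplanar spacing of 0.2 nm is an assumed, typical crystal value, not a number from the text.

```python
import math

h = 6.626e-34           # Planck's constant, J*s
m_n = 1.675e-27         # neutron mass, kg

lam = 0.03e-9           # neutron wavelength quoted in the text, m
print(h / (m_n * lam))  # corresponding neutron speed, ~1.3e4 m/s

d = 0.2e-9              # assumed interplanar spacing, m
theta = math.degrees(math.asin(1 * lam / (2 * d)))
print(theta)            # first-order Bragg angle, ~4.3 degrees
```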
For fundamental work on the laws of neutron scattering and for the creation of fundamentally new methods for studying liquids and solids - structural neutronography - the Royal Swedish Academy of Sciences awarded the 1994 Nobel Prize in Physics to the American physicist Clifford Shull and the Canadian researcher Bertram Brockhouse.
Structural neutron diffraction makes it possible to follow the behavior of individual atoms. Figure 4.4 shows the projection of elastic neutron scattering in a KH2PO4 crystal near the O-H-O hydrogen bond. Two oxygen atoms (solid lines) and two hydrogen atoms (dashed lines) are visible. The distance between them at room temperature (293 K) is about 10⁻¹² cm (Fig. 4.4a). Lowering the temperature to 77 K caused a phase transition, in which one hydrogen atom approached the oxygen atom by 0.37·10⁻¹² cm (Fig. 4.4b).


Fig. 4.4. The pattern of elastic neutron scattering at room (a) and low (b) temperatures

Neutronography is one of the major achievements of nuclear physics in recent times. It opens up wide possibilities for microscopic studies of a variety of objects - not only physical, but also chemical and biological. Such a multifaceted application of neutronography, essentially a physical method, testifies to the close relationship between the various branches of modern natural science: physics, chemistry and biology.

Control questions

1. Give a brief description of Thomson’s atomic model.
2. Describe the planetary model of the atom.
3. Is it possible to explain the atomic structure of all elements of the periodic table using Bohr’s theory?
4. What is the essence of the uncertainty principle?
5. Formulate the principle of complementarity.
6. Who and when formulated the basic equation of nonrelativistic quantum mechanics?
7. What is the principle of causality for microprocesses?
8. Name the main characteristics of elementary particles.
9. Who predicted the existence of antiparticles and when?
10. What particles does the atomic nucleus consist of?
11. Who proposed the quark hypothesis and when?
12. Give a brief description of the uranium fission chain reaction.
13. Describe thermonuclear fusion. At what temperature does it occur?
14. Give the characteristics of modern accelerators.
15. What is the essence of structural neutronography?

The famous American scientist, twice Nobel Prize laureate Linus Pauling, in his book “General Chemistry” (Moscow: Mir, 1974), writes that “the greatest help to any student of chemistry, first of all, will be a good knowledge of the structure of the atom.” The discovery of the particles that make up an atom and the study of the structure of atoms (and then molecules) is one of the most interesting pages in the history of science. Knowledge of the electronic and nuclear structure of atoms made it possible to carry out an extremely useful systematization of chemical factors, which facilitated the understanding and study of chemistry.

Development of ideas about the complex structure of the atom

The first indications of the complex structure of the atom were obtained in the study of the passage of electric current through liquids and gases. Experiments of the outstanding English scientist M. Faraday in the 1830s suggested that electricity exists in the form of separate unit charges.

The magnitudes of these unit charges of electricity were determined in later experiments on passing electric current through gases (experiments with so-called cathode rays). It was found that cathode rays are a stream of negatively charged particles called electrons.

Direct evidence of the complexity of the structure of the atom was the discovery of the spontaneous disintegration of the atoms of certain elements, called radioactivity. In 1896, the French physicist A. Becquerel discovered that materials containing uranium expose a photographic plate in the dark, ionize gases, and cause fluorescent substances to glow. Later it turned out that not only uranium has this ability.

The titanic efforts involved in processing huge masses of uranium pitchblende ore allowed P. Curie and M. Sklodowska-Curie to discover two new radioactive elements: polonium and radium. The subsequent establishment of the nature of the alpha and beta rays formed during radioactive decay (E. Rutherford, 1899-1903), the discovery of the atomic nucleus, which occupies only a small fraction of the volume of the atom (E. Rutherford, 1909-1911), the determination of the charge of the electron (R. Millikan, 1909-1914) and the proof of the discreteness of its energy in the atom (J. Franck, G. Hertz, 1912), the discovery of the fact that the charge of the nucleus is equal to the number of the element (G. Moseley, 1913), and finally the discovery of the proton (E. Rutherford, 1920) and the neutron (J. Chadwick, 1932) made it possible to propose the following model of the structure of the atom:

1. At the center of the atom there is a positively charged nucleus, occupying an insignificant part of the space inside the atom.

2. All the positive charge and almost all the mass of an atom are concentrated in its nucleus (the mass of an electron is 1/1823 amu).

3. The nuclei of atoms consist of protons and neutrons (generally called nucleons). The number of protons in the nucleus is equal to the atomic number of the element, and the sum of the numbers of protons and neutrons corresponds to its mass number.

4. Electrons rotate around the nucleus. Their number is equal to the positive charge of the nucleus (see Table 2.1).

Table 2.1. Properties of elementary particles that form an atom

Different types of atoms have a common name - nuclides. It is sufficient to characterize a nuclide by any two of three fundamental parameters: A, the mass number; Z, the nuclear charge, equal to the number of protons; and N, the number of neutrons in the nucleus.

These parameters are interconnected by the relationship

A = Z + N.

Nuclides with the same Z but different A and N are called isotopes.

This model of atomic structure is called the Rutherford planetary model. It turned out to be very clear and useful for explaining many experimental data. But this model immediately revealed its shortcomings. In particular, an electron, moving around a nucleus with acceleration (it is acted upon by a centripetal force), should, according to electromagnetic theory, continuously emit energy. This would lead to an imbalance between the electron and the nucleus. The electron, gradually losing its energy, would have to move around the nucleus in a spiral and eventually inevitably fall onto it. There was no evidence that atoms were continuously disappearing (all observed phenomena indicate just the opposite), which meant that Rutherford's model was somehow flawed.

Bohr's theory.

In 1913, the Danish physicist N. Bohr proposed his theory of the structure of the atom. At the same time, Bohr did not completely discard the old ideas about the structure of the atom: like Rutherford, he believed that electrons move around the nucleus like planets moving around the Sun, but the new theory was based on two unusual assumptions (postulates):

1. An electron can rotate around the nucleus not in arbitrary but only in strictly defined (stationary) circular orbits. The orbital radius r and the electron velocity v are related by Bohr's quantum relation:

m·v·r = n·ħ,

where m is the electron mass, n is the orbital number, and ħ is Planck's constant (ħ = h/2π ≈ 1.05·10⁻³⁴ J·s).

2. When moving along these orbits, the electron does not emit or absorb energy.

Thus, Bohr suggested that the electron in an atom does not obey the laws of classical physics. According to Bohr, the emission or absorption of energy is determined by a transition from one state, for example with energy E2, to another with energy E1, which corresponds to the transition of an electron from one stationary orbit to another. During such a transition, energy is emitted or absorbed, the magnitude of which is determined by the relation

ΔE = E2 - E1 = hν,   (2.3)

where ν is the radiation frequency and h is Planck's constant.

Bohr, using equation (2.3), calculated the frequencies of the lines in the spectrum of the hydrogen atom, which agreed very well with the experimental values. The same agreement between theory and experiment was obtained for many other atoms of the elements, but it was also discovered that for complex atoms Bohr's theory did not give satisfactory results. After Bohr, many scientists tried to improve his theory, but all improvements were proposed based on the same laws of classical physics.
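A minimal Python sketch of the kind of calculation referred to here, combining the Bohr energy levels of hydrogen, E_n = -13.6 eV/n², with relation (2.3); the choice of the first Balmer transitions is made purely for illustration.

```python
# Hydrogen line frequencies from the Bohr model and delta_E = h * nu.
h = 6.626e-34          # Planck's constant, J*s
eV = 1.602e-19         # joules per electron-volt
c = 3.0e8              # speed of light, m/s

def frequency(n_upper, n_lower):
    delta_E = 13.6 * eV * (1 / n_lower**2 - 1 / n_upper**2)
    return delta_E / h

for n in (3, 4, 5):
    nu = frequency(n, 2)
    print(n, "->", 2, nu, "Hz,", c / nu * 1e9, "nm")
# The 3 -> 2 line comes out near 656 nm, in good agreement with the
# observed red H-alpha line of the hydrogen spectrum.
```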

Quantum theory of atomic structure.

In subsequent years, some provisions of Bohr's theory were rethought, modified, and supplemented. The most significant innovation was the concept of the electron cloud, which replaced the concept of the electron only as a particle. Bohr's theory was replaced by the quantum theory of atomic structure, which takes into account the wave properties of the electron.

The modern theory of atomic structure is based on the following basic principles:

1. The electron has a dual (particle-wave) nature. It can behave both as a particle and as a wave: like a particle, an electron has a definite mass and charge; at the same time, a moving stream of electrons exhibits wave properties - for example, it is capable of diffraction.

The electron wavelength λ and its speed v are related by the de Broglie relation:

λ = h/(m·v),

where m is the mass of the electron and h is Planck's constant.

2. For an electron it is impossible to measure the position and the velocity accurately at the same time. The more accurately we measure the speed, the greater the uncertainty in the coordinate, and vice versa. The mathematical expression of the uncertainty principle is the relation

Δx·Δv ≥ ħ/(2m),

where Δx is the uncertainty in the coordinate and Δv is the error in measuring the speed (a small numerical illustration is given after this list).

3. An electron in an atom does not move along certain trajectories, but can be located in any part of the circumnuclear space, however, the probability of its being in different parts of this space is not the same. The space around the nucleus in which the probability of finding an electron is quite high is called an orbital.
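The numerical illustration promised under point 2: a Python sketch of Δx·Δv ≥ ħ/(2m) for an electron and for a macroscopic body; the masses and velocity uncertainties are illustrative assumptions.

```python
# Uncertainty relation: delta_x * delta_v >= hbar / (2 * m).
hbar = 1.055e-34        # reduced Planck constant, J*s
m_e = 9.109e-31         # electron mass, kg

# Electron whose speed is known to within 1e6 m/s:
print(hbar / (2 * m_e * 1e6))      # ~6e-11 m, comparable to atomic sizes

# A 1 g body whose speed is known to within 1 mm/s:
print(hbar / (2 * 1e-3 * 1e-3))    # ~5e-29 m, entirely negligible
```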

These provisions constitute the essence of a new theory describing the movement of microparticles - quantum mechanics. The greatest contribution to the development of this theory was made by the Frenchman L. de Broglie, the German W. Heisenberg, the Austrian E. Schrödinger and the Englishman P. Dirac.

Quantum mechanics has a very complex mathematical apparatus, so for now we are interested only in those consequences of the quantum-mechanical theory that will help us understand the structure of the atom and the molecule, the valence of the elements, etc. From this point of view, the most important consequence of quantum mechanics is that the entire set of complex motions of an electron in an atom is described by five quantum numbers: the principal number n, the secondary (orbital) number l, the magnetic number m, the spin s and the spin projection m_s. What are quantum numbers?

Quantum numbers of electrons.

The principal quantum number n determines the total energy of an electron in a given orbital. It can take any integer value starting from one (n = 1, 2, 3, ...). The limit n → ∞ corresponds to imparting to the electron an energy sufficient for its complete separation from the nucleus (ionization of the atom).

In addition, it turns out that within certain energy levels electrons can differ in their energy sublevels. The existence of differences in the energy state of electrons belonging to different sublevels of a given energy level is reflected by the side (sometimes called orbital) quantum number l. This quantum number can take integer values from 0 to n - 1.

In this case, we talk about -states of electrons, or -orbitals.

An orbital is the set of positions of an electron in an atom, i.e., the region of space in which the electron is most likely to be found.

The side (orbital) quantum number l characterizes the different energy states of electrons at a given level, determines the shape of the electron cloud, as well as the orbital momentum p - the angular momentum of the electron as it rotates around the nucleus (hence the second name for this quantum number - orbital)

Thus, an electron, having the properties of both a particle and a wave, most probably moves around the nucleus, forming an electron cloud whose shape differs in the s-, p-, d- and f-states.

Let us emphasize once again that the shape of the electron cloud depends on the value of the side quantum number l.

So, if l = 0 (an s-orbital), then the electron cloud has a spherical shape (spherical symmetry) and has no directionality in space (Fig. 2.1).

To fully explain all the properties of the atom, a hypothesis was put forward in 1925 that the electron has a so-called spin (at first, in the simplest approximation - for clarity - it was thought that this phenomenon was similar to the rotation of the Earth around its axis as it moves in orbit around the Sun). Spin is a purely quantum property of the electron that has no classical analogue. Strictly speaking, spin is the electron's own angular momentum, not associated with motion in space. For all electrons, the absolute value of the spin is always equal to 1/2 (in units of ħ). The projection of the spin onto a chosen axis (the magnetic spin number) can have only two values: +1/2 or -1/2.

Since the electron spin s is a constant quantity, it is usually not included in the set of quantum numbers that characterize the motion of an electron in an atom, and they speak of four quantum numbers.

The discovery of the complex structure of the atom is the most important stage in the development of modern physics. In the process of creating a quantitative theory of atomic structure, which made it possible to explain atomic systems, new ideas were formed about the properties of microparticles, which are described by quantum mechanics.
The idea of atoms as the indivisible smallest particles of substances, as noted above, arose in ancient times (Democritus, Epicurus, Lucretius). In the Middle Ages the doctrine of atoms, being materialistic, did not receive recognition. By the beginning of the 19th century atomic theory was gaining increasing popularity: by that time the works of the French chemist A. Lavoisier (1743-1794), the great Russian scientist M.V. Lomonosov and the English chemist and physicist J. Dalton (1766-1844) had proved the reality of the existence of atoms. However, at that time the question of the internal structure of atoms did not even arise, since atoms were considered indivisible.
A major role in the development of atomic theory was played by the outstanding Russian chemist D.I. Mendeleev, who in 1869 developed the periodic system of elements, in which the question of the unified nature of atoms was raised for the first time on a scientific basis. In the second half of the 19th century it was experimentally proved that the electron is one of the main constituent parts of any substance. These conclusions, together with numerous experimental data, meant that at the beginning of the 20th century the question of the structure of the atom arose in earnest.
The existence of a natural connection between all chemical elements, clearly expressed in Mendeleev’s periodic system, suggests that the structure of all atoms is based on a common property: they are all closely related to each other.
However, until the end of the 19th century the metaphysical conviction prevailed in chemistry that the atom is the smallest particle of simple matter, the final limit of the divisibility of matter. In all chemical transformations only molecules are destroyed and created anew, while atoms remain unchanged and cannot be split into smaller parts.
For a long time, various assumptions about the structure of the atom were not confirmed by any experimental data. Only at the end of the 19th century were discoveries made that showed the complexity of the structure of the atom and the possibility, under certain conditions, of transforming some atoms into others. On the basis of these discoveries the doctrine of the structure of the atom began to develop rapidly.
The first indirect evidence of the complex structure of atoms was obtained from the study of cathode rays produced during an electrical discharge in highly rarefied gases. The study of the properties of these rays led to the conclusion that they are a stream of tiny particles carrying a negative electric charge and flying at enormous speeds, a noticeable fraction of the speed of light. Using special techniques it proved possible to determine the mass of the cathode particles and the magnitude of their charge, and to establish that these do not depend on the nature of the gas remaining in the tube, on the substance of which the electrodes are made, or on the other experimental conditions. Moreover, cathode particles are known only in the charged state: they cannot be stripped of their charge and converted into electrically neutral particles, for electric charge is the very essence of their nature. These particles, called electrons, were discovered in 1897 by the English physicist J. J. Thomson.
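The charge-to-mass ratio mentioned here can be illustrated with a rough sketch of the crossed-field method (the numerical values below are assumed for illustration and are not Thomson's actual data): with the electric and magnetic fields adjusted so that the beam passes undeflected, the particle speed is v = E/B; switching the electric field off and measuring the radius r of the circular path in the magnetic field alone then gives e/m = E/(B²r).

E = 1.0e4      # electric field, V/m          (assumed)
B = 5.0e-4     # magnetic field, T            (assumed)
r = 0.227      # radius of curvature, m       (assumed)

v = E / B                   # speed of the cathode particles, m/s
e_over_m = E / (B**2 * r)   # charge-to-mass ratio, C/kg

print(f"v   ~ {v:.1e} m/s")
print(f"e/m ~ {e_over_m:.2e} C/kg")   # accepted value is about 1.76e11 C/kg

With these assumed values the speed comes out at about 2·10⁷ m/s and e/m close to the accepted 1.76·10¹¹ C/kg.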
The study of the structure of the atom began in earnest in 1897–1898, after the nature of cathode rays as a stream of electrons had been finally established and the charge and mass of the electron had been determined. Thomson proposed the first atomic model, picturing the atom as a lump of positively charged matter with so many electrons embedded in it that the whole is electrically neutral. In this model it was assumed that, under external influences, the electrons could oscillate, i.e. move with acceleration. This, it seemed, made it possible to answer questions about the emission of light by the atoms of ordinary matter and of gamma rays by the atoms of radioactive substances.
Thomson's model did not assume the existence of positively charged particles inside the atom. But how, then, can the emission of positively charged alpha particles by radioactive substances be explained? Thomson's atomic model left this and a number of other questions unanswered.
In 1911 the English physicist E. Rutherford, while studying the motion of alpha particles in gases and other substances, discovered the positively charged part of the atom. Further, more thorough studies showed that when a beam of parallel rays passes through a layer of gas or a thin metal foil, the emerging rays are no longer parallel but somewhat divergent: the alpha particles are scattered, i.e. they deviate from their original path. The deflection angles are small, but there is always a small number of particles (roughly one in several thousand) that are deflected very strongly. Some particles are thrown back as if they had met an impenetrable obstacle. These obstacles are not electrons, whose mass is far smaller than that of alpha particles; such deflection can occur only in collisions with positive particles whose mass is of the same order as that of the alpha particle. On the basis of these considerations, Rutherford proposed the following scheme of the structure of the atom.
At the center of the atom there is a positively charged nucleus, around which electrons revolve in various orbits. The centrifugal force arising from their revolution is balanced by the attraction between the nucleus and the electrons, so that they remain at definite distances from the nucleus. Since the mass of an electron is negligible, almost the whole mass of the atom is concentrated in its nucleus. The nucleus and the electrons, whose number is comparatively small, account for only an insignificant part of the total space occupied by the atomic system.
The structure of the atom proposed by Rutherford, or, as it is usually called, the planetary model of the atom, easily explains the deflection of alpha particles. Indeed, the sizes of the nucleus and of the electrons are extremely small compared with the size of the whole atom, which is determined by the orbits of the electrons farthest from the nucleus, so most alpha particles fly through atoms without noticeable deflection. Only when an alpha particle comes very close to a nucleus does electrical repulsion make it deviate sharply from its original path. Thus, the study of the scattering of alpha particles laid the foundation for the nuclear theory of the atom.
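A rough estimate makes it clear why only the rare close encounters produce large deflections (a sketch; the 5 MeV alpha-particle energy and the gold target are assumed here for illustration). In a head-on collision the whole kinetic energy goes into Coulomb potential energy, which fixes the distance of closest approach:

k_e2    = 1.44   # k*e^2 in MeV*fm (Coulomb constant times elementary charge squared)
Z_alpha = 2      # charge number of the alpha particle
Z_gold  = 79     # charge number of the gold nucleus
E_k     = 5.0    # kinetic energy of the alpha particle, MeV (assumed)

d_min = Z_alpha * Z_gold * k_e2 / E_k    # distance of closest approach, fm
print(f"d_min ~ {d_min:.0f} fm ~ {d_min * 1e-15:.1e} m")

The result, about 4.6·10⁻¹⁴ m (a few tens of femtometres), is thousands of times smaller than the atom itself (~10⁻¹⁰ m), so only the tiny fraction of particles aimed almost directly at a nucleus are strongly deflected.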

Bohr's postulates

The planetary model of the atom made it possible to explain the results of experiments on the scattering of alpha particles by matter, but fundamental difficulties arose in justifying the stability of atoms.
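The scale of this difficulty can be shown with an order-of-magnitude sketch (an assumed illustration using the standard result τ = a₀³/(4 r₀² c), obtained by integrating the classical Larmor radiation losses) of how long a classically radiating electron starting at the Bohr radius would survive before spiralling into the nucleus:

a0 = 5.29e-11    # Bohr radius, m
r0 = 2.82e-15    # classical electron radius, m
c  = 3.0e8       # speed of light, m/s

tau = a0**3 / (4 * r0**2 * c)     # classical lifetime of the orbit, s
print(f"classical lifetime ~ {tau:.1e} s")   # about 1.6e-11 s

A classical atom would thus collapse in about 10⁻¹¹ s, whereas real atoms are stable; resolving this contradiction is what Bohr's postulates, described below, were designed to do.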
The first attempt to construct a qualitatively new, quantum theory of the atom was made in 1913 by Niels Bohr. He set himself the goal of linking into a single whole the empirical laws of line spectra, Rutherford's nuclear model of the atom, and the quantum nature of the emission and absorption of light. Bohr based his theory on Rutherford's nuclear model and assumed that the electrons move around the nucleus in circular orbits. Circular motion, even at constant speed, involves acceleration, and such accelerated motion of a charge is equivalent to an alternating current, which creates an alternating electromagnetic field in space. Creating this field consumes energy, which can come only from the energy of the Coulomb interaction of the electron with the nucleus. As a result the electron should move in a spiral and fall onto the nucleus. Experience shows, however, that atoms are very stable formations. It follows that the results of classical electrodynamics, based on Maxwell's equations, are not applicable to intra-atomic processes, and new laws had to be found. Bohr based his theory of the atom on the following postulates.
Bohr's first postulate (postulate of stationary states): in an atom there are stationary (not changing with time) states in which it does not emit energy. Stationary states of an atom correspond to stationary orbits along which electrons move. The movement of electrons in stationary orbits is not accompanied by the emission of electromagnetic waves.
This postulate is in conflict with classical theory. In a stationary state of the atom, an electron moving in a circular orbit must have discrete, quantized values of angular momentum satisfying the condition m_e·v·r_n = nħ (n = 1, 2, 3, …), where m_e is the electron mass, v its speed and r_n the radius of the n-th orbit.
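A minimal sketch of what follows from this condition (assuming hydrogen, a single-proton nucleus, and rounded SI constants; the function names below are illustrative): combining m_e·v·r_n = nħ with the Coulomb force law gives the allowed orbit radii and energies.

hbar = 1.0546e-34    # Planck constant / 2*pi, J*s
m_e  = 9.109e-31     # electron mass, kg
e    = 1.602e-19     # elementary charge, C
k    = 8.988e9       # Coulomb constant, N*m^2/C^2

def bohr_radius(n):
    """Radius of the n-th allowed orbit, m."""
    return n**2 * hbar**2 / (k * m_e * e**2)

def bohr_energy_eV(n):
    """Energy of the n-th stationary state, eV."""
    return -k**2 * m_e * e**4 / (2 * hbar**2 * n**2) / e

for n in (1, 2, 3):
    print(n, f"r = {bohr_radius(n):.2e} m", f"E = {bohr_energy_eV(n):.2f} eV")

For n = 1 this reproduces the familiar values r ≈ 0.53·10⁻¹⁰ m and E ≈ −13.6 eV.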
Bohr's second postulate (frequency rule): when an electron passes from one stationary orbit to another, a single photon is emitted (or absorbed) with energy

hν = En − Em,

equal to the difference between the energies of the corresponding stationary states (En and Em are, respectively, the energies of the stationary states of the atom before and after the emission or absorption).
The transition of an electron from stationary orbit m to stationary orbit n corresponds to the transition of the atom from the state with energy Em to the state with energy En (Fig. 4.1).

Fig. 4.1. Illustration of Bohr's postulates

When En > Em a photon is emitted (the atom passes from a state of higher energy to one of lower energy, i.e. the electron jumps from an orbit more distant from the nucleus to a closer one); when En < Em a photon is absorbed (the atom passes to a state of higher energy, i.e. the electron jumps to an orbit farther from the nucleus). The set of possible discrete frequencies ν = (En − Em)/h of the quantum transitions determines the line spectrum of the atom.
Bohr's theory brilliantly explained the experimentally observed line spectrum of hydrogen.
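As an illustration (a sketch using the frequency rule hν = En − Em with En = −13.6 eV/n²; the function names below are not from the text), the visible Balmer lines of hydrogen, i.e. transitions ending on the orbit m = 2, come out close to the measured wavelengths of about 656, 486, 434 and 410 nm:

h  = 4.1357e-15      # Planck constant, eV*s
c  = 2.998e8         # speed of light, m/s
Ry = 13.606          # |E_1| of hydrogen, eV

def energy(n):
    return -Ry / n**2                     # energy of the n-th level, eV

def wavelength_nm(n_upper, n_lower=2):
    photon_eV = energy(n_upper) - energy(n_lower)   # emitted photon energy, eV
    return h * c / photon_eV * 1e9                  # wavelength, nm

for n in (3, 4, 5, 6):
    print(n, f"{wavelength_nm(n):.1f} nm")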
The successes of the theory of the hydrogen atom were achieved at the price of abandoning fundamental principles of classical mechanics, which had remained unconditionally valid for more than 200 years. Direct experimental proof of the validity of Bohr's postulates, especially the first one, on the existence of stationary states, was therefore of great importance. The second postulate can be regarded as a consequence of the law of conservation of energy and the hypothesis of the existence of photons.
The German physicists J. Franck and G. Hertz, studying collisions of electrons with gas atoms by the retarding-potential method (1913), experimentally confirmed the existence of stationary states and the discreteness of atomic energy values.
Despite the undoubted success of Bohr's conception as applied to the hydrogen atom, for which it proved possible to construct a quantitative theory of the spectrum, a similar theory could not be built on Bohr's ideas for the helium atom, the next element after hydrogen. For the helium atom and more complex atoms, Bohr's theory permitted only qualitative (albeit very important) conclusions. The idea of definite orbits along which the electron moves in Bohr's atom turned out to be very conventional. In fact, the motion of electrons in an atom has little in common with the motion of planets in their orbits.
Today, with the help of quantum mechanics, it is possible to answer many questions concerning the structure and properties of the atoms of any element.

