10 votes -
University of Chicago undergraduate physics bibliography
7 votes -
Albert Einstein's relativity document gifted to Nobel museum
4 votes -
Quantum computing is a marathon, not a sprint
5 votes -
Murray Gell-Mann, who peered at particles and saw the universe, dies at 89
8 votes -
The sky is blue. Why isn't everything blue?
14 votes -
Quantum computing for the very curious
6 votes -
Robert R. Wilson's congressional testimony in favor of building a particle collider at Fermilab, April 1969
5 votes -
Thorium Energy Conference 2018 - discussion of Molten Salt Reactor concepts and the new nuclear industry
12 votes -
New studies confirm existence of galaxies with almost no dark matter
10 votes -
LHCb sees a new flavour of matter–antimatter asymmetry
11 votes -
What causes rare rainbow arcs?
3 votes -
The sideways tide
4 votes -
Animation of Earth's magnetic pole shift
7 votes -
Is the Earth flat?
11 votes -
Visualization of quantum physics (quantum mechanics)
4 votes -
What does any of this have to do with physics?
14 votes -
Earth's magnetic pole is wandering, lurching toward Siberia
6 votes -
Physicists stimulate Hawking radiation from optical analogue of a black hole
6 votes -
CERN reveals plans for the Future Circular Collider (FCC) - almost four times longer than the current LHC
19 votes -
'Sonic attack' or just crickets? New analysis shows recording of 'attack' on US embassy was Caribbean wildlife
7 votes -
Amoeba finds approximate solutions to NP-hard problem in linear time
11 votes -
Quantum physics in a mirror universe
4 votes -
Scientists at the University of Oxford unifying dark matter and dark energy into a single phenomenon: a fluid which possesses 'negative mass'
27 votes -
Successful second round of experiments with Wendelstein 7-X
22 votes -
A break in the quest for the quantum speed limit
4 votes -
A layperson's introduction to Thermodynamics, part 3: Entropy and the heat death of the universe
Intro
Hello everyone,
Today we cover entropy and the heat death of the universe.
The previous chapters can be found here and here. While I recommend you read both, you should at least read the first part and skim the second.
A collection of all topics covered can be found here: https://tildes.net/~tildes/8al/.
Subject
Intro
Entropy describes how chaotic a system is. In thermodynamics, chaos is created from an irreversible process. We are all sort of familiar with this concept. A broken cup will not unshatter itself. As a consequence of how our universe works, (net) chaos can only increase. And this might have far-reaching consequences, if we look at the effects of entropy on a cosmic scale.
Entropy
Entropy describes an amount of irreversible chaos.
But first, let's cover cycles super quickly. In thermodynamics, a very important concept is a "cycle". A cycle is a repeating process that returns to its initial condition. For instance, when we ride a bike, we turn our feet around the crankshaft, repeatedly returning to the same position we started from. As we push on the pedal, some of our work is lost and turned into heat, primarily due to friction from the wheels and from the different mechanical parts.
A cycle that wastes no energy is called a reversible cycle. That would mean 100% of the work in a cycle (even the work that is turned to heat) has to be returned in some way to its original state. The most famous example of this is the Carnot heat engine.[1] But in reality, the Carnot heat engine is nothing more than a theoretical engine. As we remember from before, we cannot turn 100% of heat back into work. So any heat engine, be it a car's motor, a refrigerator, a star, or the human body, will in some way contribute to this irreversible chaos.
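To put a rough number on how far even an ideal engine is from turning all supplied heat into work, here is a tiny Python sketch of the standard Carnot efficiency limit. This formula isn't derived in this series, and the temperatures are just illustrative numbers.

```python
def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
    """Maximum fraction of supplied heat that an ideal (reversible)
    heat engine can turn into work, per the Carnot limit."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Illustrative numbers: steam at ~500 K, surroundings at ~300 K.
print(carnot_efficiency(500.0, 300.0))  # 0.4 -> at most 40% of the heat becomes work
```

Even this idealised, purely theoretical engine wastes 60% of the heat in this example; real engines do worse, and all of that waste contributes to the irreversible chaos discussed below.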
Now what about entropy? If we look at entropy at the molecular level, it all becomes a bit abstract. But we can think of this concept with bigger building blocks than molecules, and still be close enough. Say you have a brick house with orderly laid bricks. This house would love to come crashing down. And let's imagine it does. When the house lies in ruins, it is not likely to suddenly "fall" into the shape of the house again. So if the house has collapsed, our system is in a higher state of chaos. Our entropy has increased. And unless we supply work to the system (and waste energy through heat), we will not get the brick house back.
So now we understand that, on the grand scale of the universe, entropy will only increase.
The heat death of the universe
But what are the consequences of this? Imagine entropy going on for billions and billions of years. Everything in the universe slowly reaching a higher state of chaos. Everything that is orderly turns into chaos. All high quality energy has turned into low quality energy. Everything has been wasted and turned into heat. Everything ripped apart until you are left with nothing to rip apart. At this point, there are no interactions between molecules any more. Everything has reached absolute zero temperature.
At this point, entropy is at its absolute maximum. And we have reached entropic equilibrium.
This is the heat death of the universe.
Afterword
Of course, the heat death of the universe is just one of the many theories about the end of the universe. It assumes that thermodynamics properly describes the universe, and that there are no hidden surprises.
Frankly, it's the best bet we have with our current knowledge. But we still know so little. So I would not panic just yet. Alternatively, this is where we could continue with "an engineer's perspective on existential nihilism". But I think that this is something better reserved for later, and better presented by someone else.
We have covered what I consider the absolute minimum of thermodynamics that still gives us a basic understanding of the subject. There are of course a lot of other topics we could cover, but that's it for now. I will potentially write an appendix later with some questions or things that have been asked.
But for now, that's it. Questions, feedback or otherwise?
Notes
[1] The Carnot heat cycle is a bit beyond the level of what we have discussed so far. It describes a system where heat is supplied and removed to have a piston expand and contract without any energy becoming waste heat.
14 votes -
A layperson's introduction to the nature of light and matter, part 1
Introduction
I want to give an introduction on several physics topics at a level understandable to laypeople (high school level physics background). Making physics accessible to laypeople is a much discussed topic at universities. It can be very hard to translate the professional terms into a language understandable by people outside the field. So I will take this opportunity to challenge myself to (hopefully) create an understandable introduction to interesting topics in modern physics. To this end, I will take liberties in explaining things, and not always go for full scientific accuracy, while hopefully still getting the core concepts across. If a more in-depth explanation is wanted, please ask in the comments and I will do my best to answer.
Previous topics
Bookmarkable meta post with links to all previous topics
Today's topic
Today's topic is the dual nature of light and matter, the wave-particle duality. It is a central concept in quantum mechanics that - as is tradition - violates common sense. I will first discuss the duality for light and then, in the next post, for matter.
The dual nature of light
In what terms can we think of light so that its behaviour becomes understandable to us? As waves? Or as particles? There are arguments to be made for both. Let's look at what phenomena we can explain if we treat light as a wave.
The wave nature of light
Let's start with an analogy. Drop two stones in a pond and imagine what happens to the ripples when they meet each other. They will interact: when two troughs meet they amplify each other, forming a deeper trough. When two crests meet they do the same. When a crest and a trough meet they cancel out.
Now if we shine light through two small openings and observe the resulting pattern, we see it's just like ripples in a pond, forming an interference pattern. When looking at the pattern formed on a screen placed at some distance from the openings, we see a striped pattern. Light can be described as an electromagnetic wave, with crests and troughs. It sure seems like light is wavy! The wave nature of light allows us to describe phenomena like refraction and diffraction.
The particle nature of light
When we shine light on some metals, they will start tossing out electrons. This is called the photoelectric effect. How can we understand this process? Well we know light is a wave, so we imagine that the wave crashes into the electron that is chilling out near the surface of the metal. Once the electron has absorbed enough of the light's energy it will be able to overcome the attractive forces between itself and the positively charged atom core (remember, an electron has negative charge and so is attracted to the atom cores). So a higher intensity of light should make the electron absorb the required amount of energy more quickly. Easy, done!
However, there's something very peculiar going on with the photoelectric effect. If we shine low frequency light on said metal, no matter how intense the light, not a single electron will emerge. Meanwhile if we shine very little high frequency light on the metal, no matter how low the intensity, electrons will emerge. But how can this be? A higher intensity of light should mean the electron is receiving more energy. Why does frequency enter into this?
It seems that the electron needs a single solid punch in order to escape the metal. In other words, it seems it needs to be hit by something like a microscopic billiard ball that will punch it out of the metal in one go. The way physicists understand this is by saying light is made up out of particles called photons, and that the energy a photon carries is linked to its frequency. So, now we can understand the photoelectric effect! When the frequency is high enough, the photons in the light beam all individually carry enough energy to convince an electron to leave the metal. When the frequency is too low, none of the photons individually can knock an electron out of the metal. So even if we fire a single photon, with high enough frequency, at the metal we will see one electron emerging. If we shine low frequency light with a super high intensity at the metal, not a single electron will emerge.
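If you want to see the "single solid punch" idea in numbers, here is a minimal Python sketch comparing a photon's energy (E = h·f, a standard relation) to the minimum energy needed to free an electron, the so-called work function. The work function value below is just a ballpark figure I picked for a typical metal, not from any specific experiment.

```python
PLANCK_H = 6.626e-34  # J·s, Planck's constant

def electron_ejected(frequency_hz, work_function_joules):
    """An electron escapes only if a single photon carries enough energy;
    piling on more low-energy photons (higher intensity) doesn't help."""
    photon_energy = PLANCK_H * frequency_hz  # E = h * f
    return photon_energy >= work_function_joules

work_function = 7.2e-19  # J, ballpark for a typical metal (~4.5 eV)
print(electron_ejected(5e14, work_function))   # visible light: False, no electrons
print(electron_ejected(1.5e15, work_function)) # ultraviolet: True, electrons emerge
```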
So there you have it! Light is made out of particles. Wait, what? You just told us it's made out of electromagnetic waves!
The wave-particle duality of light
So, maybe light is just particles and the waves are some sort of emergent behaviour? This was a popular idea, one that Einstein held for some time. Remember the experiment where we shone light through two small openings and saw interference (commonly known as the double slit experiment)? Let's just take a single photon and shoot it at the openings! Because light is particles, we'll see the photon just go through either opening - like a particle would. Then all the non-believers will have to admit light is made out of particles! However, when we do the experiment we see the photon interfere with itself, like it was a wave. Remember this picture which we said was due to wave interference of light? When a single photon goes through the openings, it will land somewhere on the screen, but it can only ever land in an area where the light waves wouldn't cancel out. If we shoot a bunch of photons through the openings one at a time, we will see that the photons create the same pattern as the one we said is due to wave interference!
Implications
So it would seem light acts like a particle in some cases, but it acts like a wave in some others. Let's take a step back and question these results. Why are we trying to fit light into either description? Just because it's convenient for us to think about things like waves and particles - we understand them intuitively. But really, there is no reason nature needs to behave in ways we find easy to understand. Why can't a photon be a bit wavey and a bit particley at the same time? Is it really that weird, or is it just our intuition being confused by this world we have no intuitive experience with? I would love to hear your opinions in the comments!
Observing photons
To add one final helping of crazy to this story; if we measure the photon's location right after it emerges from the slit we find that it doesn't interfere with itself and that it just went through a single slit. This links back to my previous post where I described superpositions in quantum mechanics. By observing the photon at the slits, we collapsed its superposition and it will behave as if it's really located at one spot, instead of being somehow spread out like a wave and interacting with itself. The self interaction is a result of its wavefunction interacting with itself, a concept that I will explain in the next post.
Conclusion
We learned that light cannot be described fully by treating it simply as a wave or simply as a bunch of particles. It seems to be a bit of both - but neither - at the same time. This forces us to abandon our intuition and accept that the quantum world is just fundamentally different from our everyday life.
Next time
Next time we will talk about the dual nature of matter and try to unify the wave and particle descriptions through a concept known as the wavefunction.
Feedback
As usual, please let me know where I missed the mark. Also let me know if things are not clear to you, I will try to explain further in the comments!
Addendum
The photoelectric effect is actually what gave Einstein his Nobel prize! Although he is famous for his work on relativity theory he was very influential in the development of quantum mechanics too.
21 votes -
A layperson's introduction to Thermodynamics, part 2: Equilibrium, phase changes and steam engines
Intro
Hello everyone,
Today we cover equilibriums and phase changes. Through that we will get a basic understanding of how things like pressure, temperature, density, volume, etc. are related.
The previous chapter can be found here: https://tildes.net/~science/8ao/. I highly recommend that you read it before continuing.
A collection of all topics covered can be found here: https://tildes.net/~tildes/8al/.
Subject
Summarized
"Equilibrium" is fancy word for "balance". A system is in equilibrium when it is in balance with the surrounding systems. Any system will naturally attempt to be in equilibrim, and will adapt its physical properties to do so.
A phase change is the transition of matter from a state (solid, liquid, gas, or plasma) to a different state. This happens due to a change in internal energy, changing how a material is bonded.
Now that we have it summarised, let's dig a bit deeper.
Equilibrium
A system always tries to be in balance with its surrounding systems. We maybe don't think about this a lot, but we are all very familiar with this principle since we observe it every day.
If you have a cup of hot cocoa, it will cool down until it has reached ambient temperature. At this point, the cocoa is considered to be in "thermal equilibrium". If we fill a balloon with air, it will expand. It will do so until the air inside the balloon has the same pressure as the air outside the balloon. At this point, the balloon is considered to be in "barometric (pressure) equilibrium".
Just like when we talk about energy, there is a relationship when we talk about equilibriums. We have something we call (you may remember this from basic chemistry) an "ideal gas". An ideal gas is a good way of looking at this principle, since its temperature, volume and pressure have a direct relationship.
Pressure-volume-temperature diagram for ideal gases.
In the diagram above we can see that if we change one of the three variables, then one (or both) of the other two variables has to change too. For instance, if we heat some air in a canister, the air will try to expand. But being unable to change in volume, it will instead increase in pressure.[1]
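To make the canister example concrete, here is a small Python sketch of the ideal gas relationship (PV = nRT). The amount of gas and the canister size are arbitrary illustrative values, not taken from any real setup.

```python
R = 8.314  # J/(mol·K), ideal gas constant

def pressure(n_moles, temperature_k, volume_m3):
    """Ideal gas law rearranged: P = nRT / V."""
    return n_moles * R * temperature_k / volume_m3

# Air sealed in a rigid 1-litre canister: the volume cannot change,
# so heating it from 300 K to 400 K must raise the pressure instead.
n, volume = 0.04, 0.001  # arbitrary illustrative values
print(pressure(n, 300.0, volume))  # ~1.0e5 Pa, about atmospheric pressure
print(pressure(n, 400.0, volume))  # ~1.3e5 Pa, higher because T went up at fixed V
```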
Phase changes
Any material has a set of phases. The ones we'll discuss are the solid, liquid and gaseous phases. Unless we control the material's environment very carefully, materials will always follow this order when energy is added: solid becomes liquid, liquid becomes gas, and vice versa when energy is removed. For instance, water: ice (solid) becomes water (liquid), water becomes steam (gas). Each of these transformations is a phase change.
So when water is solid (ice), the molecules are in a grid. The molecules do not move around much, maybe a little bit where they stand. But they all still stand in a grid.
When the water gets heated up, the molecules will start to move. Molecules have a natural attraction to each other due to intermolecular forces like the van der Waals force. So the molecules will no longer stay in a grid, but they will still keep each other close due to this attraction. A material that sticks together but freely moves around is called a liquid.
Once the material overcomes this natural attraction, the molecules can go anywhere they want. And that's when we get a gas. Or steam, in the case of water. All of this applies even for materials we don't usually imagine would melt or evaporate, for instance steel.
Here is a visual representation of the three states.
Now comes the fun part. Ice is water that is at 0 degrees Celsius or below. Liquid water is water that is 0 degrees and above. But wait! Does that mean that water can be both solid and liquid at the same temperature? Yes, indeed. A material requires a certain amount of internal energy to become liquid. That is why internal energy and temperature are often used interchangeably, but they are not exactly the same thing.
The water molecules in ice will use the supplied energy to get excited and start moving around. This continues until the solid-liquid mixture reaches a point where all molecules move around. At that point it has completely become a liquid. While water is in this solid-liquid state, the amount of internal energy dictates how much is liquid and how much is solid. The exact same thing happens with water at 100 degrees. It can be steam or liquid, but not fully either until it reaches a certain amount of internal energy.
Here is a diagram of this process.
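Here is a small worked sketch in Python that makes the "internal energy is not temperature" point concrete, using the well-known latent heat of melting for ice. The half-kilogram mass is arbitrary, chosen only for illustration.

```python
LATENT_HEAT_FUSION = 334_000  # J/kg, energy needed to melt ice at 0 °C
SPECIFIC_HEAT_WATER = 4186    # J/(kg·K), energy to warm liquid water by 1 K

mass = 0.5  # kg of ice, arbitrary

# Melting the ice adds internal energy while the temperature stays at 0 °C...
energy_to_melt = mass * LATENT_HEAT_FUSION
# ...whereas the same amount of energy would warm the resulting liquid water a lot.
temperature_rise = energy_to_melt / (mass * SPECIFIC_HEAT_WATER)

print(energy_to_melt)    # 167000 J absorbed with no change in temperature at all
print(temperature_rise)  # ~80: the same energy would warm liquid water by ~80 °C
```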
Another fun tidbit that makes water special: water has a lower density as a solid than it has as a liquid, when both are at 0 degrees Celsius. This means that per unit of volume ice weighs less than (liquid) water. Therefore ice floats on top of water. Very few materials behave in this way. And that's extremely important to our existence, since it helps regulate heat in the ocean.
Steam engines (and implications)
We have learned a few new things today. But there is one really important wrinkle to all of this. A system will always try to be in balance. And this we can exploit. Pressure is a type of "pushing". So that's a type of work! And an increase in thermal energy can lead to an increase in temperature. We remember that from the ideal gas. So if we cleverly organize our system, we can create work from heat! This is the basis behind most heat engines (simplified a ton). We supply thermal energy to some gas or fluid, and extract work from this gas or fluid.
A classical example is the steam engine. We have water inside a closed system. When we heat up the water, it will turn into steam. And this steam wants to be much less dense than liquid water, which means it wants to take up far more volume. As a consequence, the pressure inside the water tank increases drastically. We release a small amount of this steam into a closed piston.
Here is an animation of this in action.
The piston suddenly gets a high pressure level. As we remember, it will want to be in equilibrium with its surroundings. Currently the pressure inside the piston is much higher than outside the piston. As we remember from the ideal gas law, a higher volume will mean a lower pressure. So the piston will be moved as the steam expands to reach a pressure balance. The movement from the piston will drive something, like a wheel. The steam is removed from the expanded piston, and the piston returns to its closed position.[2] Then the process is repeated again and again, to have the piston continuously move something.
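To see how "pressure is a type of pushing" really is work, here is a minimal Python sketch of the work done in a single piston stroke (W = P·ΔV). The pressures and piston dimensions are made-up illustrative numbers, and everything a real engine has to deal with is ignored.

```python
def piston_work(pressure_pa, area_m2, stroke_m):
    """Work done by gas pushing a piston: W = F * d = (P * A) * d = P * ΔV."""
    return pressure_pa * area_m2 * stroke_m

# Made-up numbers: steam at 3 atmospheres above ambient pushes a piston
# of 5 cm radius through a 20 cm stroke.
excess_pressure = 3 * 101_325   # Pa above the surroundings
area = 3.14 * 0.05**2           # m², piston face area
print(piston_work(excess_pressure, area, 0.20))  # ~480 J of work per stroke
```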
All that from a bit of water in a tank and some supplied heat.
What's next?
Next time we will talk about another important property: entropy! In the previous topic I got a lot of questions regarding the quality of energy types, and what distinguishes heat from work on an intrinsic level. Entropy is the big answer to this. From that we will also cover the heat death of the universe, which would be a good introduction to "a layperson's introduction to nihilism" if we have any philosophers here.
Note
[1] For solid and fluid materials (as well as non-ideal gases) this becomes a lot more complicated. If we ever do a "layperson's intro to fluid mechanics" we will cover it then.
[2] This described design is very inefficient and very simplified. Usually the piston is made so that steam is supplied in turns to either side of the piston. Then the work will both remove the steam that has already performed work and move the piston. That way you can have continuous movement in both directions. See for instance this image.
17 votes -
Is anti-gravity real? Science is about to find out
10 votes -
A layperson's introduction to LEDs
Introduction
I want to give an introduction on several physics topics at a level understandable to laypeople (high school level physics background). Making physics accessible to laypeople is a much discussed topic at universities. It can be very hard to translate the professional terms into a language understandable by people outside the field. So I will take this opportunity to challenge myself to (hopefully) create an understandable introduction to interesting topics in modern physics. To this end, I will take liberties in explaining things, and not always go for full scientific accuracy, while hopefully still getting the core concepts across. If a more in-depth explanation is wanted, please ask in the comments and I will do my best to answer.
Previous topics
Bookmarkable meta post with links to all previous topics
Today's topic
Today's topic will be light emitting diodes, better known as LEDs. As the name suggests, we'll have to discuss light and diodes. We will find out why LEDs can only emit a single colour and why they don't get hot like other sources of light. Let's start by discussing diodes. In case you are already familiar with diodes, note that I will limit the discussion to semiconductor (p-n with a direct bandgap) diodes as that's the type that's used in LEDs.
What's a diode?
A diode is an electronic component that, ideally, only lets electric current through in one direction. In other words it's a good resistor when the current flows in one direction and a really good conductor when the current flows in the other direction. Let's look a bit closer at how diodes function.
Semiconductors
Diodes are made out of two different semiconducting materials. In everyday life we tend to classify materials as either conducting (metals being the prime example) or non-conducting (wood, plastics, rubber). Conduction is the flow of electrons through a material: a conducting material has a lot of electrons that can move freely through it, while an insulator has none. Semiconducting materials fall in between these two categories. They do conduct, but not a lot; in other words they have a few electrons that can move freely.
N-type semiconductors
We are able to change a semiconductor's conductivity by adding tiny amounts of other materials; this is called doping. As an example, we can take silicon (the stuff that the device you're reading this on is made out of), which is the most well-known semiconductor. Pure silicon will form a crystal structure where each silicon atom has 4 neighbours, and each atom will share 1 electron with each neighbour. Now we add a little bit of a material that can share 5 electrons with its neighbours (how generous!). What will happen? Four of its shareable electrons are busy being shared with neighbours and won't leave the vicinity of the atom, but the fifth can't be shared and is now free to move around the material! So this means we added a freely flowing electron, and the conductivity of the semiconductor increases. An illustration of this process is provided here, Si is chemistry-talk for silicon and P is chemistry-talk for phosphorus, a material with 5 shareable electrons. This kind of doping is called n-type doping because we added more electrons, which have a negative charge, that can freely move.
P-type semiconductors
We can do the same thing by adding a material that's a bit stingy and is only willing to share 3 electrons, for example boron. Think for a moment what will happen in this case. One of the silicon atoms neighbouring a boron atom will want to share an electron, but the boron atom is already sharing all of its electrons. This attracts other electrons that are nearby, and one of them will move in to allow the boron atom to share a fourth electron. However, this will create the same problem elsewhere in our material. Which will also get compensated, but this just creates the same problem once more in yet another location. So what we now have is a hole, a place where an electron should be but isn't, that is moving around the crystal. So in effect we created a freely moving, positively charged hole. We call this type of doping p-type. Here's an illustration with B the boron atoms.
Creating a diode
So what would happen if we took an n-type semiconductor and a p-type semiconductor and pushed them against one another? Suddenly the extra free-flowing electrons of the n-type semiconductor have a purpose: to fill the holes in the p-type. So these electrons rush over and fill the holes nearest to the junction between the two semiconductors. However, as they do this a charge imbalance is created. Suddenly the region of p-type semiconductor that is near the junction has an abundance of electrons relative to the positive charges of the atom cores. A net negative charge is created in the p-type semiconductor. Similarly, the swift exit of the electrons from the n-type semiconductor means the charge of the cores there isn't compensated, so the region of the n-type semiconductor near the junction is now positively charged. This creates a barrier: the remaining free electrons of the n-type cannot reach the far-away holes of the p-type because they have to get through the big net negative charge of the p-type near the junction. Illustration here. We have now created a diode!
How diodes work
Think for a moment what will happen if we send current* (which is just a bunch of electrons moving) from the p-type towards the n-type. The incoming electrons will face the negative charge barrier of the p-type and be unable to continue. This means there is no current. In other words the diode has a high resistance. Now let's flip things around and send electrons through the other way. Now they will come across the positive charge barrier of the n-type semiconductor and be attracted to the barrier instead. The electrons' negative charge compensates the net positive charge of the barrier on the n-type and it will vanish. This destroys the equilibrium situation of the barrier. The p-type holes are no longer repelled by the positive barrier of the n-type (as it no longer exists) and move closer to the junction, this means the entire barrier will fade and current can move through. We now have a conductor.
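The post sticks to the qualitative picture, but the textbook Shubnikov... sorry, Shockley diode equation captures the same one-way behaviour numerically. The sketch below is a standard textbook add-on, not something derived here, and the saturation current and thermal voltage are typical illustrative values.

```python
import math

def diode_current(voltage_v, saturation_current_a=1e-12, thermal_voltage_v=0.026):
    """Shockley diode equation: I = I_s * (exp(V / V_T) - 1).
    Positive V = forward bias (conducts), negative V = reverse bias (blocks)."""
    return saturation_current_a * (math.exp(voltage_v / thermal_voltage_v) - 1.0)

print(diode_current(0.7))   # forward bias: a sizeable current flows (~0.5 A here)
print(diode_current(-0.7))  # reverse bias: essentially only a tiny leakage current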
OK, but I don't see what this has to do with light
Now let's find out how we can create light using this method. When current is applied to a diode, what happens is that one side of the diode is at a higher energy than the other side. This is what motivates the electrons to move: they want to go from high energy to low energy. If the p-type semiconductor is at a higher energy than the n-type, the electron will, upon crossing the junction between the two types, go from a high energy level to a lower one. This difference in energy must be compensated because (as @ducks mentioned in his thermodynamics post) energy cannot be destroyed. So where does the energy go? It gets turned into light!
The energy difference between the p-type and n-type is fixed, meaning a fixed amount of energy is released each time an electron crosses the junction. This means the light is of a single colour (colour is how we perceive the wavelength of light, which is determined by the energy of the light wave). Furthermore, none of the energy is lost, so there is no energy being turned into heat; in other words, the LED does not get warm.
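To connect the fixed energy drop at the junction to a single colour, here is a small Python sketch converting an energy difference into a wavelength via the standard relation E = h·c/λ. The junction energies below are ballpark numbers I chose for illustration, not values from a specific LED.

```python
PLANCK_H = 6.626e-34   # J·s
LIGHT_SPEED = 3.0e8    # m/s
EV = 1.602e-19         # joules per electronvolt

def emitted_wavelength_nm(energy_drop_ev):
    """Wavelength of the photon released when an electron drops across the junction."""
    energy_j = energy_drop_ev * EV
    return PLANCK_H * LIGHT_SPEED / energy_j * 1e9  # in nanometres

# Every electron loses the same fixed amount of energy, hence one colour per LED.
print(emitted_wavelength_nm(1.9))  # ~650 nm -> red
print(emitted_wavelength_nm(2.7))  # ~460 nm -> blue
```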
Conclusion
So now we know why the LED is so power-efficient; it does not turn any energy into heat, it all goes into light. We now also know why they only emit a single colour, because the energy released when an electron crosses the junction is fixed.
Next time
I think next time I will try to tackle the concept of wave functions in quantum mechanics.
Feedback
As usual, please let me know where I missed the mark. Also let me know if things are not clear to you, I will try to explain further in the comments!
Addendum
*) Yes, current flow is defined to be opposite to the flow of the electrons, but I don't want to confuse readers with annoying definitions.
34 votes -
A layperson's introduction to Thermodynamics, part 1: Energy, work, heat
Intro
Hello everyone,
@wanda-seldon has been giving us an introduction to quantum physics. For now, she will be given a short break to prepare new stuff. In the meantime I will be covering some classical mechanics, more specifically thermodynamics. In part 1, we need to work our way through some of the more dry concepts, so we can understand and appreciate the horrifying implications of the fun parts. So I promise, this will be the most verbose one.
Some of you may have briefly seen a version of this posted; that was due to me misunderstanding the schedule with @wanda-seldon. If you saw that one, I will mention I rewrote nearly all of it to be more readable.
Now, on today's agenda: The basics of heat, work and energy and how it's all related.
Previous posts can be found here: https://tildes.net/~science/8al/meta_post_for_a_laypersons_introduction_to_series
Important note
If @wanda-seldon mentions "energy" in her posts, it's most likely in the context of energy operators, which is a concept in quantum physics. I'm not going to pretend I understand them, so I will not be explaining the difference. We will cover what energy is in classical mechanics. So keep that in mind if you read something from either of us.
Subject
Summarized
What is heat? Using a lot of fancy words we can describe it as follows. Heat is an energy that is transferred between systems by thermal interaction. And what is work? Work is an energy that is applied in a way that performs... work. The combined energy in a system is called internal energy. This type of energy can be transformed or applied to other systems.
These are a lot of new words, so lets break that down a bit.
Systems
A system is just a catch-all term for something that can be defined with a boundary of sorts. Be it mass, volume, shape, container, position, etc. A canister, your tea mug, the steam inside a boiler, your body, a cloud, a room, earth, etc. They are all systems because you can in some way define what is within the boundary, and what is beyond the boundary.
In theory, you could define every single nucleus in the universe as a unique system. But that would be counter-intuitive. In thermodynamics we tend to lump things into a system and treat it as one thing, as opposed to quantum physics, which looks at the smallest quantities. Calculating every single water molecule in my coffee would be pure insanity. So we just treat my mug as the boundary, and the coffee inside the mug as the system. And just so it's mentioned, systems can contain systems, for instance a mug inside a room.
Energy
Energy is some quantifiable property that comes in either the form of heat or work. It can be transferred to other systems, or change between the different energy types. An example of transfer is my coffee cooling down because it's in a cold room. That means heat has been transferred from one system (my mug) to another system (the room). Alternatively you could say my hot coffee mug is warming up the room, or that the room is cooling down my coffee. Thermodynamics is a LOT about perspective. An example of transforming energy types is when we rub our hands together. That way we convert work (rubbing) into heat. It's really not more complicated than that. An interaction in this case is just a system having an effect on a different system. So a thermal interaction means it's an interaction due to heat (like in the mug example).
This brings us to an extremely important point. So important, it's considered "law". The first law of thermodynamics even. Energy cannot be destroyed, it can only change forms.
Your battery charge is never really lost. Neither is the heat of your mug of coffee. It just changed form or went somewhere else. The combined energy of all types that is residing inside a system is called internal energy.
Heat and work
Let's say we have a system, like a room. And all windows and doors are closed, so no energy can leave. In this system, you have a running table fan connected to a power line, getting energy from outside the system. The table fan is making you feel cool. Is the fan cooling down the room, heating up the room, or doing nothing? Think about it for a moment.
The first thought of many would be to think that this fan would cool the room down, it sure makes you feel cooler! But it's actually heating up the room. As we remember, internal energy is the energy inside a system (room, in this case). The fan is getting energy from outside, and uses this energy to perform work. The fan accelerates the air inside the room, and this accelerated air will evaporate some of your sweat, so you feel cool. But as we remember, energy cannot be destroyed. So we are importing energy into the system, increasing the internal energy. Some of the work from the fan is also directly converted to heat, since the motor of the fan will get hot.
So if we are not getting rid of any of this excess energy, we are increasing the internal energy. And therefore actively increasing the temperature of the room.
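To get a feel for the size of the effect, here is a rough Python sketch estimating how much a fan warms a closed room using the first law. The room size, fan power, and the assumption that no heat leaks out are all made up for illustration; a real room loses heat through its walls.

```python
AIR_DENSITY = 1.2          # kg/m³
AIR_SPECIFIC_HEAT = 1005   # J/(kg·K)

def temperature_rise(power_watts, hours, room_volume_m3):
    """All electrical energy fed to the fan ends up as internal energy of the room air
    (first law), assuming a perfectly sealed and insulated room."""
    energy_in = power_watts * hours * 3600   # joules imported into the system
    air_mass = AIR_DENSITY * room_volume_m3  # kg of air in the room
    return energy_in / (air_mass * AIR_SPECIFIC_HEAT)

# Made-up example: a 50 W table fan running for 8 hours in a 4 m x 5 m x 2.5 m room.
print(temperature_rise(50, 8, 4 * 5 * 2.5))  # ~24 K warmer, if no heat escapes at all
```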
To use a more tangible example: simplified, this phenomenon is why greenhouse gases are bad. Let's define Earth as a system. Earth gets a lot of energy from the sun. And a lot of this energy will be reflected and sent back to space. Greenhouse gases will reflect back some of this energy trying to leave Earth. So instead of having a roughly equal amount of energy enter the system (from the sun, from us doing stuff, etc.) as leaves out into space, we have an increasing amount of energy on Earth. This, as a consequence, increases the temperature.
Implications
Now, what are the maybe not so obvious implications of this?
Waste heat, from supplied energy or inefficient work, is a constant headache in engineering. If we cannot remove enough heat, we will actively heat up objects until they are destroyed. That's why good cooling systems are important in cars, computers, etc.
What's next?
Now this was not so bad. In the future we will cover phase changes, equilibriums, entropy, the heat death of the universe and briefly touch upon engines. So that's most likely two more parts after this. After that @wanda-seldon will take over again.
I plan on doing one main part per week, but if something is asked that warrants a small topic I might do smaller ones in between.
Feedback
Something unclear? Got questions? Got feedback? Or requests of topics to cover? Leave a comment.
19 votes -
Triple the apparatuses, triple the weirdness: a layperson's introduction to quantisation and spin, part 2
EDIT: With the help of @ducks the post now has illustrations to clear up the experimental set-up.
Introduction
I want to give an introduction on several physics topics at a level understandable to laypeople (high school level physics background). Making physics accessible to laypeople is a much discussed topic at universities. It can be very hard to translate the professional terms into a language understandable by people outside the field. So I will take this opportunity to challenge myself to (hopefully) create an understandable introduction to interesting topics in modern physics. To this end, I will take liberties in explaining things, and not always go for full scientific accuracy, while hopefully still getting the core concepts across. If a more in-depth explanation is wanted, please ask in the comments and I will do my best to answer.
Previous topics
Spintronics
Quantum Oscillations
Quantisation and spin, part 1
Today's topic
Today's topic will be a continuation of the topics discussed in my last post. So if you haven't, please read part 1 first (see link above). We will be sending particles through two Stern-Gerlach apparatuses and then we'll put the particles through three of them. We will discuss our observations and draw some very interesting conclusions from it on the quantum nature of our universe. Not bad for a single experiment that can be performed easily!
Rotating the Stern-Gerlach apparatus
We will start simple and rotate the set-up of the last post 90 degrees so that the magnets face left and right instead of up and down. Now let's think for a moment what we expect would happen if we sent silver atoms through this setup. Logically, there should not be any difference in outcome if we rotate our experiment 90 degrees (neglecting gravity, whose strength is very low compared to the strength of the magnets). This is a core concept of physics: there are no "privileged" frames of reference in which the results would be more correct. So it is reasonable to assume that the atoms would split left and right in the same way they split up and down last time. This is indeed what happens when we perform the experiment. Great!
Two Stern-Gerlach apparatuses
Let's continue our discussion by chaining two Stern-Gerlach apparatuses together. The first apparatus will be oriented up-down, the second one left-right. We will be sending silver atoms with unknown spin through the first apparatus. As we learned in the previous post, this will cause them to separate into spin-up and spin-down states. Now we take only the spin-up silver atoms and send them into the second apparatus, which is rotated 90 degrees compared to the first one. Let's think for a moment what we expect would happen. It would be reasonable to assume that spin-left and spin-right would both appear 50% of the time, even if the silver atoms all have spin-up too. We don't really have a reason to assume a particle cannot both have spin up and spin right, or spin up and spin left. And indeed, once again we find a 50% split between spin-left and spin-right at the end of our second apparatus. Illustration here.
Three Stern-Gerlach apparatuses and a massive violation of common sense
So it would seem silver atoms have spin up or down as a property, and spin left or spin right as another property. Makes sense to me. To be sure, we take all the silver atoms that went up at the end of the first apparatus and right at the end of the second apparatus and send them through a third apparatus which is oriented up-down (so the same way as the first). Surely, all these atoms are spin-up so they will all come out up top again. We test this and find... a 50-50 split between up and down. Wait, what?
Remember that in the previous post I briefly mentioned that if you take two apparatuses who are both up-down oriented and send only the spin-up atoms through the second one they all come out up top again. So why now suddenly do they decide to split 50-50 again? We have to conclude that being forced to choose spin-left or spin-right causes the atoms to forget if they were spin-up or spin-down.
This result forces us to fundamentally reconsider how we describe the universe. We have to introduce the concepts of superposition and wave function collapse to be able to explain these results.
Superpositions, collapse and the meaning of observing in quantum physics
The way physicists make sense of the kind of behaviour described above is by saying the particles start out in a superposition; before the first experiment they are 50% in the up-state and 50% in the down-state at the same time. We can write this as 50%[spin up]+50%[spin down], and we call this a wave function. Once we send the particles through the first Stern-Gerlach apparatus each one will be forced to choose to exhibit spin-up or spin-down behaviour. At this point they are said to undergo (wave function) collapse; they are now in either the 100%[spin up] or 100%[spin down] state. This is the meaning of observing in quantum mechanics, once we interact with a property of an atom (or any particle, or even a cat) that is in a superposition this superposition is forced to collapse into a single definite state, in this case the property spin is in a superposition and upon observing is forced to collapse to spin up or spin down.
However, once we send our particles through the second apparatus, they are forced to collapse into 100%[spin left] or 100%[spin right]. As we saw above, this somehow also makes them go back into the 50%[spin up]+50%[spin down] state. The particles cannot collapse into both a definite [spin up] or [spin down] state and a definite [spin left] or [spin right] state. Knowing one precludes knowing the other. An illustration can be seen here.
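One way to get a feel for this bookkeeping is a tiny simulation: represent the spin along each axis as a set of probabilities, and collapse it whenever it is measured. This is only a toy model of the rules described above (physicists write quantum states quite differently), but it reproduces the observed pattern of the three-apparatus experiment.

```python
import random

def measure(state):
    """Collapse a superposition: pick one outcome at random according to its
    probability; afterwards the state is 100% that outcome."""
    outcome = random.choices(list(state), weights=list(state.values()))[0]
    return outcome, {k: (1.0 if k == outcome else 0.0) for k in state}

# The atom starts in a 50/50 up-down superposition.
spin_z = {"up": 0.5, "down": 0.5}
print(measure(spin_z)[0])            # first apparatus: definitely "up" or "down"

# Measuring along left/right forces a fresh 50/50 choice...
spin_x = {"left": 0.5, "right": 0.5}
print(measure(spin_x)[0])            # second apparatus: "left" or "right"

# ...and doing so erases the earlier up/down answer, putting it back in a superposition.
spin_z = {"up": 0.5, "down": 0.5}
print(measure(spin_z)[0])            # third apparatus: 50/50 again, as observed
```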
This has far-reaching consequences for how knowable our universe is. Even if we can perfectly describe the universe and everything in it, we still cannot know such simple things as whether a silver atom will go left or right in a magnetic field - if we know it would go up or down. It's not just that we aren't good enough at measuring, it's fundamentally unknowable. Our universe is inherently random.
Conclusion
In these two posts we have broken the laws of classical physics and were forced to create a whole new theory to describe how our universe works. We found out our universe is unknowable and inherently random. Even if we could know all the information of the state our universe is in right now, we still would not be able to track perfectly how our universe would evolve, due to the inherent chance that is baked into it.
Next time
Well that was quite mind-blowing. Next time I might discuss fermions vs bosons, two types of particles that classify all (normal) matter in the universe and that have wildly different properties. But first @ducks will take over this series for a few posts and talk about classical physics and engineering.
Feedback
As always, please feel free to ask for clarification and give me feedback on which parts of the post could be made clearer. Feel free to discuss the implications for humanity of existing in a universe that is inherently random and unknowable.
Addendum
Observant readers might argue that in this particular case we could just as well have described spin as a simple property that will align itself to the magnets. However, we find the same type of behaviour happens with angles other than 90 degrees. Say the second apparatus is at an angle phi to the first apparatus, then the chance of the particles deflecting one way is cos^2(phi/2)[up] and sin^2(phi/2)[down]. So even if there's only a 1 degree difference between the two apparatuses, there's still a chance that the spin will come out 89 degrees rotated rather than 1 degree rotated.
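The angle dependence in this addendum is easy to play with numerically. Here is a minimal Python sketch of those probabilities, cos²(φ/2) and sin²(φ/2), with a few illustrative angles.

```python
import math

def deflection_probabilities(angle_degrees):
    """Chance that a spin-up atom comes out 'up' vs 'down' along an axis
    rotated by the given angle: cos^2(phi/2) and sin^2(phi/2)."""
    phi = math.radians(angle_degrees)
    return math.cos(phi / 2) ** 2, math.sin(phi / 2) ** 2

print(deflection_probabilities(0))    # (1.0, 0.0): same axis, always comes out up
print(deflection_probabilities(90))   # (0.5, 0.5): the 50/50 split from the experiment
print(deflection_probabilities(1))    # (~0.9999, ~0.0001): tiny but non-zero flip chance
```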
32 votes -
A layperson's introduction to quantisation and spin, part 1
Introduction
I want to give an introduction on several physics topics at a level understandable to laypeople (high school level physics background). Making physics accessible to laypeople is a much discussed topic at universities. It can be very hard to translate the professional terms into a language understandable by people outside the field. So I will take this opportunity to challenge myself to (hopefully) create an understandable introduction to interesting topics in modern physics. To this end, I will take liberties in explaining things, and not always go for full scientific accuracy, while hopefully still getting the core concepts across. If a more in-depth explanation is wanted, please ask in the comments and I will do my best to answer.
Previous topics
Spintronics
Quantum Oscillations
Today's topic
Today's topic will be quantisation, explained through the results of the Stern-Gerlach experiment which was first performed in 1922. This topic treats a much more fundamental concept of quantum physics than my previous topics.
What is the Stern-Gerlach experiment?
In 1922 physicists Stern and Gerlach set up an experiment where they shot silver atoms through a magnetic field; the results of this experiment gave conclusive support for the concept of quantisation. I will now first explain the experiment and then, using the results, explain what quantisation is. If you would rather watch a video on the experiment, Wikipedia provides one here; it can be watched without sound. Note that I will dive a bit deeper into the results than this video does.
The experiment consists of two magnets, put on top of each other with a gap in the middle. The top magnet has its north pole facing the gap, the bottom magnet has its south pole facing the gap. See this illustration. Now we can shoot things through the gap. What do we expect would happen? Let's first shoot through simple bar magnets. Depending on how its poles are oriented, it will either bend downwards, upwards or not at all. If the bar magnet's north pole is facing the top magnet, it will be pushed downwards (because then north is facing north). If the bar magnet's south pole is facing the top magnet, it will instead be pushed upwards. If the bar magnet's poles are at a 90 degree angle to the two magnets it will fly straight through, without bending. Lastly, if the bar magnet's poles are at any other angle, say 45 degrees, it will still bend but less so. If we send through a lot of magnets, all with a random orientation, and measure how much they got deflected at the other side of the set-up we expect to see a line, see 4 in the illustration.
Now we'll send through atoms. Stern and Gerlach chose silver atoms because they were easy to generate back in 1922 and because they have so-called spin, which we will get back to shortly. We send these silver atoms through in the same way we sent through the bar magnets; lots of them and all of them with a random orientation. Now what will happen? As it turns out, all the atoms will either end up being deflected all the way up or all the way down, with nothing in between. 50% will be bent upwards, 50% downwards. So silver atoms seem to respond as if they were bar magnets that either bend maximally up or maximally down. In the illustration this is labeled 5.
If we were to take only the silver atoms that bent upwards and sent them through the experiment again, all of them would bend upwards again. They seem to remember if they previously went up or down rather than just deciding on the spot each time if they go up or down. What model can we think of that would explain this behaviour? The silver atoms must have some property that will make them decide to bend up or down. Let's call this property spin, and say that if the silver atoms chose to bend up they have spin up, if they chose to bend down they have spin down. It seems that these are the only two values spin can have, because we see them bend either maximally up or maximally down. So we can say the spin is quantised; it has two discrete values, up or down, and nothing in between.
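A toy simulation contrasts the two outcomes described above: classical bar magnets deflect by a continuous amount depending on their random orientation, while the silver atoms only ever land in one of two spots. The numbers are arbitrary; this only mimics the qualitative pattern, not the real physics of the apparatus.

```python
import math
import random

def classical_deflection():
    """A randomly oriented bar magnet deflects by any amount between -1 and +1."""
    angle = random.uniform(0, math.pi)
    return math.cos(angle)             # continuous smear: the 'line' on the screen

def silver_atom_deflection():
    """A silver atom is only ever deflected fully up (+1) or fully down (-1)."""
    return random.choice([+1, -1])     # quantised: two discrete spots

print([round(classical_deflection(), 2) for _ in range(5)])  # e.g. [0.31, -0.87, 0.02, ...]
print([silver_atom_deflection() for _ in range(5)])          # e.g. [1, -1, -1, 1, 1]
```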
Conclusion
We have found a property of atoms (and indeed other particles like electrons have spin too) that is quantised. This goes against classical physics where properties are continuous. This shows one of the ways in which physics at the smallest scales is fundamentally different from the physics of everyday life.
Next time
Next time we will investigate what happens when we rotate the angle of the magnets used in the experiment. This will lead us to discover other fundamental aspects of physics and nature, quantum superpositions and the inherent randomness of nature.
EDIT: part 2 is now up here.
Feedback
As discussed in the last post, I am trying something different for this post. Talking about more fundamental quantum physics that was discovered 100 years ago rather than modern physics. Did you like it? Let me know in the comments!
30 votes -
A layperson's introduction to quantum oscillations
Introduction and motivation
In an effort to get more content on Tildes, I want to try and give an introduction on several 'hot topics' in condensed matter physics at a level understandable to laypeople (high school level physics background). Making physics accessible to laypeople is a much discussed topic at universities. It can be very hard to translate the professional terms into a language understandable by people outside the field. So I will take this opportunity to challenge myself to (hopefully) create an understandable introduction to interesting topics in modern physics. To this end, I will take liberties in explaining things, and not always go for full scientific accuracy, while hopefully still getting the core concepts across. If a more in-depth explanation is wanted, please ask in the comments and I will do my best to answer.
Previous topics
Why has it been 100 days since the last post?
I had a different topic planned as a second post; however, it turned out I had to explain a lot more concepts than I anticipated, so it would no longer fit this format. Then I got busy. Now I finally found a topic I think I can do justice in this format.
Today's topic
Today's topic will be quantum oscillations.
What are quantum oscillations?
Quantum oscillations are periodic fluctuations in some materials' properties when they are exposed to a strong magnetic field. As the name suggests, this effect arises from quantum physics. Nevertheless, I think it's relatively easy to give a feel for how it works. In the rest of this post I will focus on one kind of quantum oscillation, the oscillation of a material's resistance (with the very fancy name Shubnikov-de Haas oscillations), because electrical resistance is a concept most people are familiar with. However, there are many other material properties that fluctuate similarly.
What do quantum oscillations look like?
Let's start from the basics, electrical resistance. Electrical resistance tells you how hard it is for an electrical current to flow through a material. Related to this is conductance, which instead tells you how easy it is for a current to flow through a material (so it is the inverse of the resistance). Now, something funny happens to some metals' conductance when you expose them to a strong magnet.
Let's think for a moment on what we expect would happen. Would the conductivity be affected by the magnet? Perhaps a stronger magnet would increase the conductivity, or reduce it. What we most certainly wouldn't expect to happen is for the conductivity to go up and down as we increase the strength of the magnet we aimed at the material. Yet, this is exactly what happens. In this picture we see the conductivity (expressed on the vertical axis) plotted against the magnetic field (expressed on the horizontal axis). The conductivity is going up and down like crazy!
Why is this happening?
One of quantum physics' core principles is quantisation (who'd have thought). And as it turns out, this quantisation is at the core of this behaviour. For the purpose of this post, quantisation can be thought of as a restriction on the energies that the electrons are allowed to have.
Normally, when electrons are in a metal, there are no real restrictions on what energy they are allowed to have. Some electrons will not have a lot of energy and won't move, other electrons will have a lot of energy and be able to move freely around the metal.
However, when metals are put in a strong magnetic field, the energies that the low-energy electrons are allowed to have change drastically. The electrons are only allowed to be at certain energies, with wide gaps in between these energies. Crucially, the exact values of these energies change with the strength of the magnet.
This means that at some magnet strengths, the allowed low-energy states will nicely line up with the energies the free-flowing electrons have. Some of those electrons will then interfere with the free-flowing electrons, making it harder for them to flow freely*. This interference in electron flow means less conductance! Then, when we change the magnetic field so that the energies are no longer aligned, the free-flowing electrons no longer get caught and will be able to move freely, so the conductivity goes up again. This pattern becomes more pronounced as the magnetic field strength increases.
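The post skips the maths, but the standard textbook picture behind these allowed energies is the Landau level, E_n = (n + ½)·ħ·ω_c with ω_c = eB/m. The sketch below is just that textbook formula for a free electron, showing how the allowed energies (and the gaps between them) spread apart as the field grows, which is why they periodically sweep past the energy of the conducting electrons.

```python
HBAR = 1.055e-34             # J·s, reduced Planck constant
ELECTRON_CHARGE = 1.602e-19  # C
ELECTRON_MASS = 9.109e-31    # kg

def landau_level_energy(n, field_tesla):
    """Allowed electron energy in a magnetic field: E_n = (n + 1/2) * hbar * eB/m."""
    cyclotron_frequency = ELECTRON_CHARGE * field_tesla / ELECTRON_MASS
    return (n + 0.5) * HBAR * cyclotron_frequency

# As the field grows, the ladder of allowed energies stretches out.
for field in (5, 10, 15):  # tesla, illustrative lab-magnet strengths
    print(field, [landau_level_energy(n, field) for n in range(3)])
```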
What is it good for?
These oscillations were first noticed in bismuth by Shubnikov and de Haas in the year 1930. It was direct evidence for the quantum mechanics underlying nature. These days quantum oscillations are a popular method to extract information on the properties of metals, alloys and semimetals. These techniques have been used to, for example, further our understanding of high temperature superconductivity.
Sources
D Shoenberg - Magnetic Oscillations in Metals (1984)
*more technically: the probability of scattering is proportional to the number of states into which the electron can be scattered, which is given by the number of available states near the energy surface of the material.
32 votes -
My grandfather thought he solved a cosmic mystery
8 votes -
Shockwave shadows in ultra slow motion (Bullet schlieren)
2 votes -
Does time exist? | Andrew Zimmerman Jones
8 votes -
What are the strings in String Theory?
7 votes -
The reason we haven’t directly detected dark matter
10 votes -
Everything you thought you knew about quantum physics is different
5 votes -
Arthur Ashkin’s optical tweezers: The Nobel Prize-winning technology that changed biology
5 votes -
Donna Strickland won her Nobel prize in Physics before she got a Wikipedia page
11 votes -
The 2018 Nobel Prize in Physics is awarded to Arthur Ashkin, Gérard Mourou, and Donna Strickland for their groundbreaking inventions in the field of laser physics
12 votes -
Physicists investigate why matter and antimatter are not mirror images - Does antimatter fall up, or fall down?
9 votes -
Shockwaves from WWII bombing raids reached the edge of space
13 votes -
Bizarre particles keep flying out of Antarctica's ice, and they might shatter modern physics
14 votes -
He got the Nobel. She got nothing. Now she's won a huge prize and she's giving it all away
9 votes -
Long-sought decay of Higgs boson observed
18 votes -
Stephen Hawking on what makes a good theory and the quest for a Theory of Everything
5 votes