Nicole Yunger Halpern


Recording 2022-Nov-22
Google Scholar
arXiv

Conversation Highlights
What are the different kinds of entropy?
How is the second law of thermodynamics possibly limited in scope?
Can a quantum system break the second law?
Can a quantum engine break the Carnot bound?
What is a key thermodynamic property of a clock?
How was our understanding of quantum systems influenced by classical thermodynamics?
How can qubits have a temperature below zero?
When can we expect commercial applications of quantum computing?
How can we define work and heat in a quantum setting?
What are quantum refrigerators?
What is the importance of coherence as a thermodynamic resource?
What are Robin Hood transfers and algorithmic cooling?
What is the possibility of finding quantum effects in biology?

People, books, and articles mentioned
Articles on Quantum Biology By Clarice D. Aiello
The Janus Point By Julian Barbour
David Kaiser

Related IAT sessions
Paul Erker: Time, Clocks, and Thermodynamics
Mark Van Raamsdonk: How Time Emerges From Entanglement

Paul: It’s nine o’clock on Saturday morning, November the 22nd. Good morning, everyone, or good afternoon, or good evening, wherever you happen to be in the world. Welcome to the It’s About Time clubhouse room. It’s a beautiful day in Menlo Park, California; if you are here, welcome. My name is Paul Borrill, and I’m your host for the It’s About Time club. This session is being recorded; if you come onto the clubhouse stage, please be aware that you are consenting to being recorded. You are listening to the room It’s About Time.

I’m Paul Borrill, a physicist and computer scientist; founder and CEO of a company developing causality-preserving networks for building resilient decentralized infrastructures for Earth and Mars. I served on Apple’s infrastructure team, as VP and CTO at Veritas Software, and as Vice President and Chief Architect for storage systems at Quantum Corporation. I have a PhD in physics from University College London, and I’m a graduate of the Stanford executive program. My lifelong obsession with dependable computing came from working with NASA, designing computer systems and software for an experiment which performed extraordinarily well on the last successful flight of the Challenger.

This morning, we are honored with the presence of Nicole Yunger Halpern. Nicole is a theoretical physicist at the National Institute of Standards and Technology, a Fellow of the Joint Center for Quantum Information and Computer Science, and an adjunct assistant professor at the University of Maryland. Nicole re-envisions 19th-century thermodynamics for the 21st century, using quantum information theory. She has dubbed this research “quantum steampunk,” after the steampunk genre of art and literature that juxtaposes Victorian settings with futuristic technologies. Nicole completed her PhD at Caltech, winning the international Ilya Prigogine Prize for a thermodynamics thesis. While an ITAMP Postdoctoral Fellow at Harvard University, she won the International Quantum Technology Emerging Researcher Award. Nicole is also the author of the upcoming book for the general public: Quantum Steampunk: The Physics of Yesterday’s Tomorrow.

The description of today’s talk is: Victorian thermodynamics crystallized the notion of entropy, which quantum information science has extended. Fusing these modern and antiquated sciences, quantum thermodynamics is real-life steampunk.

Paul: So Nicole, welcome to the It’s About Time stage today. Thank you for being here. Please tell us how you first got interested in the foundations of physics, and what brings you to the discussion about spacetime and causality?

Nicole: Thanks for hosting me. It’s wonderful to be here. I have long been interested in abstract ideas, and in part, my interest in physics comes from a long-standing interest in the metaphysical.

Partially, my interest in the foundations of quantum physics comes from having been reading pretty much all the time when I was growing up. I was usually carrying a book; I would read at meals, while waiting for meals, while waiting to get picked up from school.

Reading all the time taught me to build worlds in my head. I feel like that’s what I get to do for work now as a theoretical physicist. I’ve also long been interested in what is foundational, and in metaphysics in particular; that interest was kindled by my high school philosophy teacher, who taught me about Plato’s cave, and so on and so forth. He was fascinated by quantum physics and special relativity, though he would be the first one to say that he didn’t understand them. But he was very curious, and he passed along his curiosity to me. When I went to college, I looked for the background behind what had interested us both. I had a very strong interest in math and physics, I was really interested in everything, and I had taken as much physics as I could in high school, so I put together a major that was a combination of subjects: it was mostly physics, but it also had philosophy, history, and mathematics in it. This kind of well-rounded view showed me that it was physics that I wanted to focus on very deeply, and it also pointed to the intersection of quantum information theory and thermodynamics as the topics that I really needed to pursue. In quantum information theory and in thermodynamics, we can achieve a lovely balance. On the one hand, these fields teach us about the natures of information and time and how the universe changes, so these are very foundational topics. But on the other hand, quantum information theory can be applied in technologies such as quantum computing, quantum sensors, and quantum clocks. Meanwhile, thermodynamics is relevant to machines: engines, refrigerators, batteries. So I was drawn to, on the one hand, the abstract and fundamental nature of these fields, and on the other hand, how it was at least relatively easy to convince myself that I was doing something useful whilst studying this intersection.

Paul: That’s super interesting. When I saw your Dartmouth speech, I was very impressed. You said that you fell in love with this notion of the tensor product. Can you tell us what the tensor product is and why you fell in love with it so much?

Nicole: I probably ought to rewatch that speech; I haven’t thought about it very much in years. But I am still a fan of the tensor product. The tensor product is used a great deal in quantum theory. Suppose that we have two quantum systems, such as two atoms. Each of those systems is represented mathematically with a Hilbert space. And if we want to represent the pair of systems together, then we do so with a tensor product of the two Hilbert spaces. These Hilbert spaces are part of linear algebra, which is kind of the algebra that we learned in middle school and high school, except on steroids. I found that linear algebra is used a great deal in quantum information theory, and I did fall in love with that math. There are lots of things that have been proved in linear algebra, and there’s a great deal of structure; you can learn linear algebra from the bottom up. In contrast, in a field like differential equations, we have a grab bag of results. A few things have been proved here and there, but there’s very little that amounts to a complete system, a way of thinking about differential equations; people just proved whatever they happened to be able to. In contrast, linear algebra has this lovely structure that I find kind of soothing.
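For readers who want to see this concretely, here is a minimal Python sketch (using NumPy; the particular states are chosen purely for illustration) of how the tensor product combines two single-qubit Hilbert spaces into one four-dimensional joint space:

```python
import numpy as np

# Single-qubit basis states |0> and |1>, each in a 2-dimensional Hilbert space.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A joint state of two qubits lives in the tensor-product space, of
# dimension 2 x 2 = 4. NumPy's kron() implements the tensor
# (Kronecker) product for vectors and matrices.
ket01 = np.kron(ket0, ket1)   # the product state |0>|1>
print(ket01)                  # [0. 1. 0. 0.]

# Not every vector in the joint space factors into two single-qubit
# states: this Bell state is entangled.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(bell)                   # [0.7071 0. 0. 0.7071]
```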

Paul: Wonderful. So let’s back up a second here and just explain a few things to our audience. What is entropy to you, as an information theorist as well as a thermodynamicist?

Nicole: I think of entropy as a measure of ignorance or uncertainty about something that’s going to happen. For instance, I lived for a few years in the Boston area, and which weather pattern would be dominant on any given day in the Boston area is a very random variable. Practically any given day could be sunny, or rainy, or snowy, or very windy, or completely calm. Suppose that we find out what the dominant weather is on a given day; then we’re surprised to some extent, and we learn some information. A quantification of the amount of surprise we feel, or the amount we learn, is a measure of how uncertain we were, how ignorant we were. And that quantity is an entropic quantity. There are different entropic quantities for different settings. For instance, suppose that we learn the dominant weather in the Boston area on each of many, many days, and then we average how much we learned in a day over all the days. That’s a different entropic quantity; that’s called the Shannon entropy, which is probably the most familiar entropy. There are these different entropies that are relevant to different situations, and so quantify our uncertainties in different contexts, for instance about a single day or on average. And I can say more about the different flavors of entropy later, since that’s one of my favorite topics, and entropy is so closely related to the flow of time. So an entropy, we can say, is a measure of ignorance or uncertainty. That sounds very information-theoretic, but in addition to being an information theorist, I’m a thermodynamicist. And probably people here are very familiar with the notion that entropy plays an important role in thermodynamics, particularly the second law of thermodynamics, which helps us understand the flow of time. To understand what thermodynamic entropy is, I like to think about a system of many, many particles; for simplicity, we can think of them as classical particles. For instance, suppose that we’ve just baked a scone and it’s sitting on a countertop in a kitchen that’s closed. The molecules of scent that are being given off by that scone fill some volume, have some energy, have some pressure. So there are these large-scale characteristics of that gas of molecules. And if we’re talking about a system of many, many, many particles, then what we tend to be able to know about that system is just these large-scale properties, such as the total energy, the temperature, and the pressure. But any given set of large-scale characteristics (some value for the energy, some value for the volume, some value for the total number of particles) is consistent with many, many, many different microscopic configurations. The particles could be clumped over here or clumped over there; they could be moving in this direction or that direction or that direction. We just don’t have the resources to measure all of the particles’ configurations, where they are and how they’re moving and so on. So if we know just the set of large-scale, we call them macroscopic, characteristics of the whole system, we are very ignorant about which microscopic configuration the system is in. And that ignorance is measured by the thermodynamic entropy.
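To make the two entropic quantities above concrete, here is a small Python sketch; the weather probabilities are made up purely for illustration:

```python
import numpy as np

# Hypothetical probabilities for the dominant weather on a Boston day.
weather = {"sunny": 0.35, "rainy": 0.30, "snowy": 0.15,
           "windy": 0.15, "calm": 0.05}

# Surprisal of a single outcome: -log2(p). Rarer outcomes surprise us more.
for outcome, p in weather.items():
    print(f"{outcome:>5}: surprisal = {-np.log2(p):.2f} bits")

# Shannon entropy: the surprisal averaged over outcomes,
# H = -sum_x p(x) log2 p(x).
H = -sum(p * np.log2(p) for p in weather.values())
print(f"Shannon entropy = {H:.2f} bits")
```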

Paul: Here’s another follow-up question. Most people are familiar with the concept of entropy, singular, but your book says that there are many kinds of entropies. So what are they, and how should we think about them?

Nicole: I described an entropy as a measure of uncertainty or ignorance. Entropies also have other roles: they quantify the best efficiencies with which we can perform useful tasks, such as sending information from one person to another, or charging a battery, something like this. I already hinted a little at how there are a couple, at least two, different entropies in information theory. We said, in the example of the Boston weather, that if we learn the dominant weather on one day in the Boston area, then how much information we have learned, and so how ignorant we used to be, is given by one entropic quantity. And if we average that ignorance over many, many, many days, then we get another entropic quantity, the Shannon entropy. We can reframe the story in terms of a task, what we would call an operational task, of predicting the weather on any given day. Some people would be interested in nailing the weather on every day; other people might not care about being so specific, and might care just about the average. And so different entropic quantities quantify how well we can perform different tasks. In a more conventional information-theoretic setting, we would have a task such as data compression. Suppose that I want to send you a message, and we agree in advance on the possible messages that I might send, and you know the probability that I send this message or that message or that message. Suppose initially, for the sake of being able to prove something easily about this problem, that I send you many, many, many copies of this message. I want to take all of those copies of the message and compress them into the smallest possible number of bits, or basic units of information. I’m performing the operational task of data compression. And we can ask, how efficiently can I do that? How many basic units of information do I need to send you per copy of my message? The answer was proven by Claude Shannon, the founder of information theory, to be the Shannon entropy of my message; that is, your average uncertainty, averaged over messages, about which message I’m sending. But suppose that I don’t need to send you many, many, many copies of the message. Suppose that I need to send you only five copies, or only one copy. These tasks are different from the many-copies task. There are different entropies that tell us how far I can compress the data, how many basic units of information I need. For instance, one of these entropies is called the max entropy. There’s a whole family of entropies, and there are different entropies suited to different tasks. Information theorists even like to take the entropies that exist and define yet more entropies. Some of them are more general: you can think of such an entropy as a mother entropy, and if you take one limit of it, you get one entropy that’s already known, while if you take a different limit of it, you get a different entropy that’s already known. So there’s this whole industry in information theory of defining entropies that have uses in operational tasks.
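One standard “mother entropy” is the Rényi entropy, whose limits recover the Shannon, max, and min entropies. Here is a sketch of the standard definitions in Python (the particular distribution is just an example):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) = log2(sum_x p(x)^alpha) / (1 - alpha)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # drop zero-probability outcomes
    if np.isclose(alpha, 1.0):          # the alpha -> 1 limit is Shannon entropy
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
print(renyi_entropy(p, 0.0))    # max entropy: log2(number of outcomes) = 2.0
print(renyi_entropy(p, 1.0))    # Shannon entropy = 1.75 bits
print(renyi_entropy(p, 100.0))  # ~ min entropy: approaches -log2(max p) = 1.0
```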

Paul: Fascinating. So how is the second law of thermodynamics being defined in contemporary physics?

Nicole: For background, I imagine that a number of people in this clubhouse room are familiar with the second law of thermodynamics. But just to ensure that we’re on the same page, I should probably give a little background about the second law itself.

The second law of thermodynamics can be stated in many different ways, but the way that I usually default to is: suppose that you have a closed, isolated system; then the thermodynamic entropy of that system will tend to increase or remain constant in time. Another way of putting the second law, which is closely related, is this. Suppose that you have some thermodynamic system, such as the molecules of scent that have been emitted by a scone that we’ve just taken out of the oven. That thermodynamic system is in some large-scale state; in our concrete example, those molecules begin clumped around the scone that we’ve just taken out of the oven. We can have in mind a different possible large-scale state; for instance, we could imagine those molecules spread out all across the kitchen. We can ask, is it possible for our system to transition spontaneously from the initial large-scale state to the final large-scale state? We answer that question by using the second law of thermodynamics. If our system is closed and isolated, then we calculate the thermodynamic entropy of the initial configuration, the molecules all clumped together. We calculate the entropy of the final configuration, the molecules spread out across the kitchen. We ask, does the entropy increase or remain constant? If and only if the answer is yes, the transition can happen spontaneously.
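A standard worked example makes this check concrete (textbook ideal-gas thermodynamics, not from the conversation itself): if $n$ moles of an ideal gas spread spontaneously from a volume $V_1$ into a larger volume $V_2$ at fixed energy, the entropy change is

$$\Delta S = nR \ln\frac{V_2}{V_1} > 0,$$

where $R$ is the gas constant. The entropy increases, so the spreading (the scent filling the kitchen) is allowed; the reverse transition would have $\Delta S < 0$, so the molecules never spontaneously re-clump around the scone.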

So the second law answers this question, can this state transform spontaneously into that state, via the checking of whether one inequality holds. But the second law is somewhat limited in its scope. When we use the second law of thermodynamics, we implicitly have in mind a system that we would say is thermodynamically large, technically infinitely large. And we don’t have infinitely large systems; we might be interested in small systems, single molecules, and we might be interested in quantum systems.

Also, the second law of thermodynamics was built for equilibrium states, and an equilibrium state is a very quiescent state. If a system like this gas of scent molecules is in an equilibrium state, then on a large scale, nothing really is happening; its large-scale properties, such as the number of particles, the volume, the energy, and so on, are approximately constant. But so much of the world is far from equilibrium. For instance, we as living beings are far from equilibrium. So we might want to generalize the second law of thermodynamics to arbitrary states, not just equilibrium states.

Paul: Your book mentions even more second laws of thermodynamics. Is that what you’re about to tell us?

Nicole: Yes, exactly.

Paul: Thank you.

Nicole: There are a number of second laws of thermodynamics. Sometimes I feel like almost everybody has come up with their own second laws of thermodynamics. One set that I’ve dealt with a fair amount was developed at this intersection of quantum information theory and thermodynamics. Suppose that we have a quantum system, like an atom, that is in some state, some quantum state. Maybe it has a lot of energy, and it’s in some environment that has a fixed temperature; maybe the environment has its own average number of particles. And we’re wondering whether this quantum system (actually, it could even be a classical system; this framework is pretty general) could transform from its current state to another possible state, via something akin to a spontaneous, thermalizing type of process. This question is more general than the one answered by the second law, because it allows for quantum systems, for small systems, and for far-from-equilibrium states. We can answer this question, but since it’s more general, we have to do more mathematical legwork. We have to check not only whether one inequality is true, but whether a whole family of inequalities is true. And this set of inequalities is sometimes called a set of second laws.
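As a sketch of what checking “a whole family of inequalities” can look like, here is one well-known classical formulation, the Rényi-divergence second laws (the distributions below are hypothetical, and the full quantum criterion involves more structure than this sketch shows; the inequalities here are a necessary condition):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(p||q) = log2(sum_x p^alpha * q^(1-alpha)) / (alpha - 1).
    Assumes full-support distributions for simplicity."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):          # Kullback-Leibler (relative entropy) limit
        return np.sum(p * np.log2(p / q))
    return np.log2(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

gamma = np.array([0.7, 0.3])   # thermal (Gibbs) occupations set by the environment
start = np.array([0.2, 0.8])   # initial energy-level occupations (hypothetical)
end   = np.array([0.5, 0.5])   # candidate final occupations (hypothetical)

# One family of "second laws": the transition requires that the Renyi
# divergence from the thermal state not increase, for every alpha checked.
alphas = [0.0, 0.5, 1.0, 2.0, 10.0]
allowed = all(renyi_divergence(start, gamma, a) >= renyi_divergence(end, gamma, a)
              for a in alphas)
print("passes these second-law inequalities:", allowed)
```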

Paul: Fantastic, thank you. So I presume those inequalities have at least some relationship to Bell’s Inequalities.

Nicole: Bell’s inequalities tell us about whether a phenomenon can be modeled as classical or whether it involves some physics that is truly nonclassical. These second laws of thermodynamics address thermodynamic-type transformations, and they actually apply to classical and to quantum systems. So there’s a bit of a difference there, though in both cases we do check inequalities.

Paul: Excellent, so let’s get on to the topic of time, which is the favorite subject of this audience. First I’m going to ask you, why does time only run forwards? And then that should lead on, I think, to your telling us about your work on quantum clocks.

Nicole: As an information theorist, I like to think about the flow of time in terms of information. To me, time flows in just one direction because information tends to be dissipated and lost. For instance, we can think again about that clump of scent molecules that has just been emitted by a scone. The clump of molecules will start very near the scone, and eventually will spread out across the kitchen. If we watch a video of what’s happening in the kitchen, so that we can somehow see the molecules, even though usually just looking at a kitchen we can’t, then we’ll know that time is flowing, because we see the molecules spreading out. Now why do the molecules spread out? Because after a while, the microstate, the microscopic configuration of all the particles, will be well modeled as though it were chosen totally randomly from all of the possible configurations of the molecules. And as people here have probably heard, there are a lot more configurations consistent with the molecules being spread out all across the kitchen; there are more ways to arrange the molecules all across the kitchen than there are ways to arrange the molecules so that they stay clumped next to our scone.

So if you look at the kitchen at some later time, you’ll probably see a configuration that is selected as though totally randomly. Most of the configurations have the molecules spread all across the kitchen, and that’s why you’ll probably see the molecules spread all across the kitchen.

Now, what does that have to do with the loss of information? If, at a later time, the configuration of molecules that you see is going to be some totally random configuration, then the configuration you see is totally independent of how the molecules started off, which was clumped near the scone. It doesn’t matter whether the scone was initially in the middle of the kitchen, or in one corner, or up near the ceiling. The initial conditions don’t matter; our information about how the kitchen used to be doesn’t matter; that information gets dissipated. Now you can ask, why is that information dissipated?

Why, if you look at the kitchen after a while, do you see a molecular configuration that is totally random? I don’t know if anybody has the full answer to that question. I usually hear the answer pinned on chaos or ergodicity, which is more or less the same thing, and I ultimately think that is the mechanism behind how information is lost. And so I see information loss as very much bound up in the flow of time.

Paul: Beautiful answer, what great clarity you have on this. Let’s quickly bring the room to a reset here, as people have joined us. It’s coming up on 9:30 am right now, which is halfway through the formal part of the presentation. You are listening to our guest speaker Nicole Yunger Halpern, and Nicole is talking about her book, which I would recommend everyone read; it is a fantastic book called Quantum Steampunk. And the interesting thing here is, I see a lot of English culture and food descriptions, particularly around tea and scones. So for those of you who didn’t quite know what a scone was, Nicole is talking about an item of food in English culture. All right.

Nicole: That’s very steampunk.

Paul: Yes, and the association with Victorian England as well comes over really beautifully. The book is an absolute delight to read. Not only is it scientifically and technically inspiring and enlightening, it is also full of humor and interesting intellectual perspectives; perspectives on some of the scientific concepts that we’re grappling with today in the nature of time and causality.

Okay, so let’s get back to my questions. How can the laws of thermodynamics be broken, for example, with quantum physics?

Nicole: A natural question to ask, since the laws of thermodynamics started being codified in the early 1800s, is, should we expect quantum systems to break them? I think it’s quite natural to believe that quantum systems might be able to break the second law of thermodynamics, but as far as I know, the second law is still going strong. However, there are some ways that you could bend around the second law of thermodynamics using quantum systems. For example, suppose that we have an engine. The engine draws its energy from an environment at some fixed high temperature, so this environment has a lot of energy, and heat flows from this environment to a colder environment. Some of the heat is picked off by the engine and converted into work, which is an ordered, coordinated, useful form of energy, as opposed to heat, which is very random. A common model for an engine has the engine operating between these two different environments at two different temperatures.

The French scientist Sadi Carnot established an upper bound on how efficiently an engine could operate if it had access to just two heat baths, at two different temperatures, in thermal equilibrium. I should probably say a little bit about what I mean by equilibrium and thermal equilibrium; they are quite relevant to the nature of time. I already talked about what I meant by equilibrium: this is a very quiet state of a system, at least from a large-scale perspective, so the system’s large-scale properties, its temperature and pressure and so on, remain approximately constant. Thermal equilibrium is a specific equilibrium state. We can think of it as a state in which the system has some well-defined temperature and follows a specific mathematical model. Now we can say, suppose that we build a quantum engine, and suppose that this quantum engine also has access to two different environments to which we can ascribe two different temperatures. Can this quantum engine break Carnot’s upper bound on the efficiency with which engines can operate?

Nicole: Now, people did propose a quantum engine, and actually by now there are, I believe, a few of these models: quantum systems that appear to break Carnot’s bound. And these engines kind of look like they ought to obey Carnot’s theorem, because they are quantum systems and they interact with just two different environments to which we can ascribe two different temperatures. However, if we look closely, then we find that Carnot’s assumptions aren’t really being obeyed. These environments are not in simple thermal-equilibrium states, even though they have temperatures. They’re actually in quantum states that we call squeezed states, which kind of play with the quantum uncertainty principle that people have probably heard of; if you’d like further explanation, I’d be happy to provide one. So these engines rather look like they ought to obey Carnot’s bound, and they break Carnot’s bound, so we might be surprised. But on closer inspection, we find that really, they just bend around Carnot’s bound.
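For reference, Carnot’s bound on an engine operating between a hot bath at temperature $T_h$ and a cold bath at temperature $T_c$ (standard thermodynamics, not specific to this conversation) is

$$\eta \;\le\; \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}.$$

A squeezed bath is not in a simple thermal state, so a single temperature $T_h$ no longer captures all of its resources, and the bound in this form no longer applies; that is the sense in which such engines bend around, rather than break, the second law.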

Paul: So when we talk about Carnot’s bound, we’re talking about entropy always increasing. I find it interesting that the symbol we use is delta S. And as far as I can tell, in the historical record, the S stands for Sadi. Do you have any other insight into where the symbol S came from?

Nicole: That’s interesting; I had actually never heard that. I was always just taught in statistical physics class to use an S. And I should probably mention, since you brought us back to Carnot, that I might not have made explicit the relationship between Carnot’s theorem and the second law of thermodynamics. We can see Carnot’s limit as another manifestation of the second law of thermodynamics, akin to the statement that the total entropy of a closed, isolated system only remains constant or increases.

Paul: Thank you. I’m glad you added that definition. So yes, from what I remember, and I read this in Julian Barbour’s book, he basically said that you couldn’t use delta E for entropy, because E was already used for energy. So they had to find another symbol, and apparently the S was chosen, but no one really knows why; the thought is that it was for Sadi.

Nicole: There’s also H from Boltzmann’s H theorem that was used.

Paul: That’s right. So what do you think is the most important thermodynamic property of time?

Nicole: I find it very interesting that we can detect the flow of time only if we’re not at thermal equilibrium. We said again that a system at thermal equilibrium is in a fairly quiet state, and it has a temperature. If we want to detect the flow of time, we need something that’s not at thermal equilibrium.

Why? Basically, a clock has to be out of equilibrium. It wasn’t immediately obvious to me that a key defining property of a clock should be that it cannot be at thermal equilibrium; that’s not the first thing that would come to my mind. But if you think about it, it makes some sense. Again, we can tell that time flows because the molecules in a kitchen spread out, because we age, because things flow toward thermal equilibrium over time. So if you have a system that is just at thermal equilibrium, then its entropy is already maximized, and it’s not going to undergo any changes that signal the flow of time. That’s why clocks need to be out of thermal equilibrium. And there’s some extra math that goes along with this, but that’s the basic physics.

Paul: Fascinating. I think it was less than a year ago, last December, that we had Paul Erker, from the University of Vienna, talk to us about time, clocks, and thermodynamics. And he said pretty much the same thing about how thermodynamics connects to quantum clocks and the fundamental limitations on our ability to measure time. So thank you for that; this is really insightful.

Nicole: He is doing great work.

Paul: He is doing great work, and so is everyone else at the University of Vienna, especially now that Anton has won the Nobel Prize. This is wonderful, I’m really delighted.

So let me go on to a couple more questions here; forgive me if this takes us back to a couple of things that you may have answered, but I’m looking for some different perspectives, not the perspectives that I happen to have, but a different way of looking at things, because I’m confused by them. So, tell us a bit more now about the quantum aspect of thermodynamics. For example, what is it about thermodynamics that kept us stuck for so long in the age of steam engines, in the second law as a direction of time?

Nicole: Part of the question was what is quantum about thermodynamics?

Paul: So I guess, how does quantum come into the picture? For example, how did the Victorian thermodynamic concepts of work and heat change when you bring quantum concepts into the picture?

Nicole: Yes, it’s fascinating to realize that in part our understanding of quantum systems arose from thermodynamics, but also in part our understanding of quantum systems was hindered by thermodynamics. The problem was that thermodynamics worked really well during the 1800s. And thermodynamics was originally thought of as involving just these large-scale properties that I’ve kept talking about, such as energy and temperature and pressure and volume.

People could describe engines and refrigerators using just these large-scale properties of materials or whole gases. People didn’t need to think about any little particles that make up these materials. There were a few people in thermodynamics, such as Ludwig Boltzmann and James Clerk Maxwell, who posited that materials did consist of these little particles that were buzzing around. And the theory of atomism had been established pretty well by John Dalton, earlier. But many scientists didn’t like Dalton’s theory of atoms.

Some of these scientists, including thermodynamicists, were saying: look, we can describe so much of the world so well using this theory of just large-scale properties. It seems unscientific to posit that there are these itsy-bitsy particles that no one can ever see. So if you’re doing science, or maybe they would have called it natural philosophy, then you shouldn’t be talking about these little particles. We are very fortunate that Boltzmann and James Clerk Maxwell and so on persisted and introduced the notion of particles behaving this way or that way into thermodynamics. They established the field of statistical mechanics, which can be seen as pulling back the hood on thermodynamics and explaining, in terms of particles, why large-scale materials have the large-scale properties that they do and behave in the large-scale ways in which they do.

Also, in this approximately turn-of-the-century era, a key problem was discovered. It was called the blackbody-radiation problem, and it was solved by Max Planck; it might be a bit too much of a tangent to get into the details of what it was. But it was perhaps the earliest and most important signal, or at least a very early and very important signal, that the world consists of what we might call quanta, each of which has just some small amount of energy; we can think of light as consisting of little packets of energy. And when Max Planck solved this important problem, he explained a large-scale system in terms of these little packets of energy; he was really doing statistical mechanics. So statistical mechanics and thermodynamics have this rich and interesting interplay, even historically, with quantum theory.

Paul: Fascinating, this is really interesting. So, leading on from that last question: why do you think the United States has taken so long to catch on to quantum thermodynamics?

Nicole: I talk a little in the book about how quantum thermodynamics has been much more developed in Europe especially, but also in places such as Israel, Brazil, and Singapore. I can’t say that I have collected data or done a rigorous academic study, I should add. In the United States, until recently, there have basically been just a few quantum thermodynamicists here and there; I’ve been one of them. During my PhD, I felt like I was often the only, or almost the only, American participating in these European quantum thermodynamics gatherings. Fortunately, over the past few years, the situation has been changing in the United States. People have been incorporating quantum thermodynamics into conferences and grant proposals, and we at the University of Maryland just established the first quantum thermodynamics hub in the country.

Why has the US taken so long to catch on to quantum thermodynamics? Again, I don’t have a rigorous, well-researched answer. But I rather suspect that the answer lies in an insight that was presented to me by Dave Kaiser, who is both a physicist and a historian of science at MIT. He pointed out that the development of physics in the United States during the 20th century was dominated by the Cold War, by very practical considerations. And, recognizing that stereotypes and generalizations are not entirely accurate, with that caveat, we can also say that, historically, the United States in science and engineering has been characterized as very practical-minded, concerned with getting things done.

Quantum thermodynamics, in contrast, has a history of being very theoretical, very abstract, and almost philosophically minded, because people took the foundations, the laws of thermodynamics, and extended them, generalized them, recast them in an information-theoretic language. Whereas in the United States, one might say, performing experiments and seeing data can be regarded as more important. Meanwhile, Europe has this amazing, millennia-old philosophical tradition.

So the kind of philosophical and mathematical, abstract origins of quantum thermodynamics were well placed there. However, I think that the communities have been converging. Because of the rapid development of quantum information science, we now have great control in labs over quantum systems, over atoms and photons (particles of light) and ions, so really interesting quantum experiments are happening, and some of the quantum thermodynamics theory is making it into experiments. Also, if you publish enough high-profile papers, then people will pay attention. So I think it’s a positive development that quantum thermodynamics is now increasingly seen in the US as something exciting, an extra spice to add to one’s research agenda.

Paul: That’s amazing. And I’m delighted that you’re doing this, because we’ve needed some revitalization of the concepts of thermodynamics for quite some time now. I learned many things in your book; one thing that I didn’t know before is: how is it that qubits can have temperatures below absolute zero?

Nicole: We think of absolute zero as the ultimate limit on temperature, and it is the ultimate limit on temperature, in the sense that nothing can have a temperature lower than absolute zero. In fact, according to the third law of thermodynamics, no system can actually reach absolute zero by any finite process. So it might sound very strange to claim that quantum systems can have temperatures below absolute zero. The key is that, counterintuitively, if a system is at a temperature below absolute zero, it’s actually really hot. It’s actually hotter than it would be at infinite temperature. All this is very counterintuitive; it basically stems from some kind of funny math. But suppose that you have a qubit, a basic unit of quantum information. A basic unit of quantum information is stored in a quantum system that has two possible energy levels. For instance, if you have an electron and you put it in a certain environment, then it’ll have two possible energy levels. You can imagine starting off your qubit at zero temperature; the qubit will be in its lowest energy state. And you can imagine raising the temperature and raising the temperature, and imagining measuring the qubit’s energy as you go. If the temperature is higher, then you have some probability of finding, loosely speaking, the qubit in its upper energy level. As you raise the temperature further, if you were to measure the energy, you’d have a higher probability of finding the qubit in its upper energy level. You can keep raising the temperature, and keep imagining measuring the qubit’s energy, until you get to the highest temperature you can possibly reach. In theory, that is a temperature of infinity.

Now, if you measure the qubit’s energy, you’ll have a probability of one half of finding the qubit in its lower energy level and a probability of one half of finding the qubit in its upper energy level. Why does a temperature of infinity correspond to just this half-and-half probability distribution? It’s because of how the math works out. But we can imagine pumping even more energy into the qubit, so that if we were to measure the qubit’s energy, we would have a higher probability of finding the qubit in its upper energy level than of finding it in its lower energy level. And, for instance, people can pump this extra energy into atoms using lasers. But we’ve already reached infinite temperature; what happens beyond infinite temperature, when the qubit has even more energy? The qubit turns out to have a negative temperature; again, that’s how the math works out. And this ability to have a negative temperature is really available only to a system whose number of configurations, let’s say, is bounded. A system like a qubit is quantum in the sense that, if you ask how much energy it has, the answer is only one of a few possible answers; in the case of a qubit, there are only two possible answers. The energy is quantized; it comes only in discrete packets. If the number of such configurations, or possible amounts of energy, is finite, then a system can have a negative temperature. In contrast, you and I have so many atoms in us, and we can be in this position or that position or that position, we live in a continuous three-dimensional space, so you can approximate us as being able to have just about any amount of energy. Whereas quantum systems are much more limited, and that’s why, ultimately, they can have temperatures below absolute zero.
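A sketch of the math behind this, using standard two-level Boltzmann statistics (the energy gap and units here are arbitrary): the upper-level population climbs from 0 toward 1/2 as the temperature grows, and exceeds 1/2 only at negative temperature.

```python
import numpy as np

def excited_population(T, gap=1.0, kB=1.0):
    """Probability of finding a two-level system (qubit) in its upper
    energy level at temperature T, from the Boltzmann distribution:
    p = exp(-gap/(kB*T)) / (1 + exp(-gap/(kB*T)))."""
    x = np.exp(-gap / (kB * T))
    return x / (1.0 + x)

for T in [0.1, 1.0, 10.0, 1e9, -10.0, -1.0]:
    print(f"T = {T:>13}: p(upper) = {excited_population(T):.3f}")

# T = 0.1   -> p ~ 0.000  (nearly always in the ground state)
# T -> inf  -> p -> 0.500 (half and half)
# T < 0     -> p > 0.500  (population inversion: "hotter than infinite")
```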

Paul: Fascinating, thank you. I learned something new today; I learn something new every time I open a new chapter in your book. This is wonderful. So there are some very interesting statements in your book, which maybe you can explain a little bit more. You said that information is the capacity to distinguish amongst alternatives.

Nicole: Yes. Information is such an abstract and general thing; how do we define it? I’ve heard two explanations, and the one that I resonate with is this: information is the ability, or something that’s needed, to distinguish between alternatives. Consider a situation that’s presented in my book as an example. Suppose that you are the gatekeeper of a secret society, and I want to get in, and I have to give you the right password. The password conveys the information that I belong to the society rather than not belonging. What I give you is information, and that information enables you to distinguish: do I belong, or do I not belong?

Paul: Excellent. Now, the reason why this is curious to me is that if we can distinguish amongst alternatives, that suggests that we can see the alternatives and be able to pick them, make a choice of some kind. Doesn’t that give us a kind of God’s-eye view of the universe, that we can see everything? That seems to be at odds with the idea that we can only see things that we were entangled with, perhaps. What confuses me is, if we can choose amongst alternatives, what is it that we’re missing here in terms of free choice? Are we thinking we can see a God’s-eye view? Or is there something else going on here in how we can enumerate or distinguish these alternatives?

Nicole: We can’t see everything, so we don’t have an infinite amount of information. For instance, in the case of the secret society, I’m passing you one bit of information, one basic unit of information, that enables you to distinguish between the two alternatives: she belongs to the society, or she does not belong to the society. So I’ve passed you information, and you can distinguish between those alternatives, but there are lots of alternatives that you cannot distinguish between. For instance, based on the information I gave, you cannot distinguish which month of the year I was born in, or which year I was born in, or what color my eyes are. So we can distinguish some alternatives, but not all of them.

Paul: This is good, because there’s another piece of language out of your book here, which I liked, related to this: you call information an ingredient for distinguishing amongst alternatives.

Nicole: Yes, I have now… oh go ahead.

Paul: Okay. Now, one of our favorite definitions, which I’ve used a lot in this club room, is that information is the answer to a yes/no question, which is a very common kind of viewpoint. So there’s an issue here: if something is the answer to a yes/no question, then what is the damn question? And then how does that relate to knowledge?

Nicole: I see a bit of information as the answer to a yes or no question. The bit is the basic unit of information, at least as defined by information theorists and computer scientists. It turns out thermodynamicists actually like to use a different unit of information. But that’s kind of a different matter. So yes, I agree that the basic unit of information is the amount of information that you gain, if you have no idea of the answer to a yes or no question and then you learn the answer.

Paul: Well, that’s beautiful, excellent, thank you. All right. We’re almost at the top of the hour right now. I’d like to give the folks I have on stage, our moderators, an opportunity to ask their questions. And for those of you who are listening in the audience, let me do a quick reset for the room. You are listening to the clubhouse room It’s About Time, which is a place to discuss our evolving knowledge of the nature of time and causality for physicists, computer scientists, mathematicians, neuroscientists, philosophers, and practicing engineers. And today we are honored to have Nicole Yunger Halpern join us from the University of Maryland. Nicole is a theoretical physicist at the National Institute of Standards and Technology. She is a fellow at the Joint Center for Quantum Information and Computer Science, and an adjunct assistant professor at the University of Maryland. Nicole re-envisions 19th-century thermodynamics for the 21st century using quantum information theory. She has dubbed this research quantum steampunk, which is the title of her book, and I would recommend everyone get a copy of it; it’s an absolutely fascinating and delightful read, named after the steampunk genre of art and literature that juxtaposes Victorian settings with futuristic technologies. Nicole completed a PhD at Caltech, winning the International Ilya Prigogine Prize for a thermodynamics thesis. While a postdoctoral fellow at Harvard University, she won the International Quantum Technology Emerging Researcher Award. Nicole is also the author of this book, Quantum Steampunk. So I thoroughly recommend that everyone gets a copy, and I will ask Melissa if she could put that link at the top of the page for us today. If you have a bio and a photograph and we can tell that you’re a real person, we will bring you on the stage. When you come on the stage and you ask your questions, tell us who you are and where you are in the world, and be as concise as you can in asking your question for our speaker today. As I am bringing people up on stage, if you wave, if you raise your hand, I will bring you up. I’m going to ask Andrei and Mark and Cami, perhaps, if they have any questions.

Andrei: Thank you, Paul. And hi, Nicole. So the question that I had has to do with something that I heard you say on Sean Carroll’s podcast. At some point, both of you discussed that you can have energy transfers that are work-like and those that are heat-like, and you both made an aside that the definition of work is kind of anthropomorphic. But it wasn’t the main topic of the discussion, so you moved on. I was just kind of curious if you could reflect on that, because for me, you know, a molecule’s motion is clearly heat, but for Maxwell’s demon, who is hitching a ride on a helium atom, perhaps that’s a spaceship. So I was curious what your thoughts on that are.

Nicole: Sure. For background, for the many people who didn’t hear Sean Carroll’s podcast: there are two types of energy that can be transferred between systems, work and heat. I often think of, or characterize, work as useful, coordinated energy that can be immediately harnessed to power a car, or push a rock up a hill, or charge a battery. Heat is random energy; it’s not directly useful in the same way. And we talked about how this characterization of heat and work distinguishes between the two based on what some agent might wish to accomplish with that type of energy. I do think that thermodynamics is an operational theory, an agent-based theory, in which we think about agents who want to accomplish tasks like powering factories, and we ask how efficiently they can perform these tasks, using quantities such as entropies. However, I agree that it makes sense to characterize heat and work also in terms of something other than their usefulness to some agent; the agent can even be a molecular motor, it doesn’t have to be conscious.

Yes, we do have a description of heat and work in terms of something other than an agent. Work is energy that is transferred as some parameter changes. And by parameter, I mean something like the volume of a container that is holding a gas. Suppose that you have a gas in a container that is capped by a piston. As the gas expands, it will move the piston. Suppose that the piston is at the top of the container, and some weight is sitting on top of that piston; we can say that the gas is performing useful work, because it is doing something good for us: it’s lifting this weight. But even if the weight isn’t there, we can say that the energy transmitted by the gas is work, because there’s this external parameter in the environment, the height of the piston, that determines the volume of the gas, and this external parameter is changing as the gas expands. There are a number of such external parameters, related to volume and, basically, the thermodynamic quantities that tend to grow as the size of the system grows. As those parameters change, we tend to say that the system is performing work rather than transmitting heat.
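In symbols, the standard classical definition being described here: for a gas at pressure $P$ whose volume $V$ (the external parameter) changes quasi-statically from $V_1$ to $V_2$, the work performed by the gas is

$$W = \int_{V_1}^{V_2} P\, dV,$$

a definition that refers only to the changing parameter, with no mention of an agent or a task.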

Andrei: Okay, thank you. So it does depend a bit on the perspective of the agent; basically, that’s the answer.

Nicole: So since we can define work in terms of these external parameters, these quantities that tend to grow as the size of the system grows, we can define work just in terms of those external parameters without making reference to an agent or a task.

Andrei: Okay, thank you very much.

Nicole: And in fact, that is, I think, the original textbook definition that I learned for work. It was my go-to definition, and my perspective has shifted a bit as I’ve gotten into research.

Paul: Excellent question. Thank you very much, Andrei. And a great answer. Nicole, this is fantastic. Thank you. So on my list, I have Mark Jeffrey, followed by Cami, and then IO who I invited to be back on the stage to ask his question. And then John, and then Josh, in that order. So Mark, you’ve been a previous speaker on this clubhouse and welcome back. Do you have a question for our speaker today?

Mark: Yeah, thanks, Paul. Yeah, I do. I find thermodynamics fascinating but also very hard; as you might guess from my English accent, I do understand scones, but I don’t understand entropy so much. So a question for you, Nicole, and thank you so much for talking about thermodynamics; you don’t hear so much about it. It seems like it’s all about information, which I find fascinating. I’m particularly interested in computational physics, and particularly in what is being put forward by Stephen Wolfram and Jonathan Gorard about trying to find a fundamental theory of physics from hypergraphs. So I wonder if you could say anything about the intersection of thermodynamics, which is all about information, and computational physics: how computation might feed into this, or how thermodynamics might feed into these theories of computational physics?

Nicole: Yes. I should say I’m not so familiar with the details of, for instance, Stephen Wolfram’s proposal, but the interplay between computation and thermodynamics is definitely of interest to a wide community. I talk about it in chapter five of my book, and we can see the roots of it in scenarios such as Maxwell’s demon, which was a paradox proposed during the 1800s by the British scientist James Clerk Maxwell in order to challenge the second law of thermodynamics by using the concept of information. Maxwell’s demon has inspired a lot of work that, for instance, has led to Landauer’s principle. Rolf Landauer was an information scientist at IBM; he actually thought that quantum computers wouldn’t be able to work, so my community is also partially working against him. But he did put forth an idea that is very much embraced in what we sometimes call the thermodynamics of information. The notion is that you pay a cost anytime you perform an erasure, that is, eliminate information by a logically irreversible operation, an operation that takes any of many possible values to just one value. If you have a chalkboard that’s full of information and you erase it, then given the final state, you cannot recover the initial state, because there were many possible messages that could have been scribbled on the board, and when you have just the erased board, you don’t know which initial state the board was in, what the information that was eliminated originally was.

So anytime you perform information erasure, you have to spend thermodynamic work, and you have to dissipate some entropy. So information erasure has a thermodynamic cost. He actually worked out that there should be a minimal cost: the least possible amount of work that you have to pay per bit of information erased is Boltzmann’s constant (which is a constant of the universe, kind of like the charge of the electron), times the temperature of the environment, times the logarithm of some mathematical factor. This Landauer principle has deep implications for computation. If we want to compute and compute and compute, we will eventually run out of scrap paper. The universe doesn’t have an infinite supply, so we’re going to have to erase some. And we just said that an erasure costs thermodynamic work, so computation itself has an intrinsic thermodynamic cost. That is what I see as one of the key ties between computation and thermodynamics.
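The “mathematical factor” here is ln 2, so Landauer’s bound reads W ≥ k_B T ln 2 per erased bit. A quick back-of-the-envelope calculation in Python (standard constants; room temperature assumed for illustration):

```python
import math

kB = 1.380649e-23   # Boltzmann's constant, in joules per kelvin
T = 300.0           # roughly room temperature, in kelvins

# Landauer's principle: erasing one bit costs at least kB * T * ln(2) of work.
landauer_bound = kB * T * math.log(2)
print(f"minimum work per erased bit at {T} K: {landauer_bound:.2e} J")
# ~2.87e-21 J: tiny per bit, but it adds up over enough computation.
```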

Paul: What a great question, Mark. Thank you. And I hope you got it answered.

Mark: Yeah. That was a great answer. That’s all fascinating. Thank you, Nicole.

Paul: All right. Thank you. Next on my list. I have Cami. Do you have a question for our speaker today, Cami?

Cami: Yes, thank you so much, Paul, for consistently hosting one of the best clubs and rooms and conversations on clubhouse. It is an honor to hear you speak, Nicole. My question for you is based upon your incredible breadth and depth of knowledge in the realm of theoretical physics, and your publication on the history of physics and how it interplays with modern thermodynamics. I’d be interested in hearing your thoughts on how feasible it is, and how soon it might be, that quantum computing can be achieved on a commercialized basis, where it doesn’t need to be stabilized at extremely cold temperatures. Thank you.

Nicole: There’s a joke that quantum computing is always 20 years out, so that’s partially my answer. But more rigorously... actually, I should give a little bit of background. There are two types of quantum computers. There are quantum computers that can be programmed to solve any problem that a quantum computer can solve, and then can be reprogrammed to solve any other problem that a quantum computer can solve; these are called universal quantum computers. There are also quantum simulators; these are special-purpose machines that can solve a certain class of problems.

Quantum simulators have been around for a number of years; the quantum simulators that have been realized have up to a million particles. And they have had uses: they’ve taught us about transitions between different exotic phases, kind of analogous to liquid and solid, except strange quantum phases that we would not encounter in our kitchen or freezer. So quantum simulators have proved useful, though they might not be at the point at which we could use them for materials engineering in industry. There’s a lot of work on building quantum computers that are universal. These are being made from many, many different types of materials. A lot of the best universal quantum computers that we have today consist of tens of qubits, the basic units of quantum information, although IBM, within the past two weeks, unveiled its latest model, which has quite a bit more than its earlier model. And there’s a thought, which I think is reasonable, that these universal quantum computers, when they first become useful, will be useful for solving chemistry problems, such as how certain molecules, loosely speaking, work; what are some of their important properties that are too complicated for us to figure out on classical computers?

Today’s quantum computers are relatively small and very, very subject to noise. In order to have a quantum computer solve, say, one of the most famous problems relevant to cybersecurity, we would probably need a quantum computer of 100,000 qubits to a million qubits. These large-scale quantum computers would have full error correction, which would get rid of the noise that is plaguing quantum computers today. So we are definitely in an early stage of quantum computing.

But people are slowly starting to implement error-correcting codes on very small scales. And large-scale quantum error correction is very, very much in the roadmaps of the people building quantum computers; it’s just that full-scale quantum error correction is going to be hard. It’s going to take time and resources. I would guess a few decades, but again, until now there has been the sense that quantum computing is always 20 years away. I do expect that if the interest and funding do not dry up, then we will achieve full-scale quantum computers. But I do think it’s going to take a decent number of years.

I guess I should add, there’s this whole industry of developing algorithms for the small, noisy quantum computers that we have now. These algorithms are more heuristic: instead of solving a problem exactly, and proving that quantum computers can solve the problem well, they give more of a sense that it seems kind of reasonable that a quantum computer should probably be able to give a pretty good answer to this question. We’ll see how those problems turn out.

Cami: Thank you.

Paul: Thank you, Cami, for an excellent question. Okay, so on my list I have IO, and I’m trying to bring you up on the stage, but we don’t know what’s going on with bugs in the system. There you go; again, let me just invite you as a speaker, and when you come up, we will be able to move you here and have a question from you. In the meantime, I’ve got John, Josh, Jeffrey, and Kyle, and then Liane. So, in that order, I’m going to ask you to tell us who you are, where you are in the world, and what your question is for our speaker today. Next I have John, John Mallinckrodt. Go ahead, please.

John: Yeah, thank you, Paul. So I’m a retired professor of physics from Cal Poly Pomona, down the road from where Nicole went to school, Harvey Mudd. But the main thing I came up to say was that that was just a wonderfully lucid, clearly presented discussion of entropy. I don’t think I’ve heard a better one. So I wanted mostly to say that, but just so that I don’t sound like nothing more than a fanboy, I also had a couple of quick comments and then one question. One is that, early on in the Wordle craze, I found some discussions about basing Wordle solvers on Shannon entropy and using them as a very nice educational tool for Shannon entropy. I don’t know if Nicole has any comments on that. The other quick comment was, I always told my students that the second law of thermodynamics is basically fully responsible for how much fun it is to watch movies backward. It’s wonderful to think about the role of the second law of thermodynamics in that. And then finally, a question; you had touched on this. I saw some time ago a description of work and heat in quantum theory that I think you might have been alluding to, and that I’m surprised doesn’t get more attention, which makes me wonder if it’s not entirely accurate. It seems to me that it very nicely distinguishes between work and heat in a way that is always a little fuzzy in classical mechanics. At its heart, work is a process that changes the quantum states of a system, whereas heat is a process that changes the occupation numbers of the fixed quantum states of a system. Is that fair to say kind of universally? Or are there exceptions? Why don’t we hear that more?

Nicole: First, thanks for the kind words; I’m glad that that explanation was useful for you. Regarding videos being played backward, yes, there is actually a really interesting instance of that. When I was at Caltech, we had a physics colloquium presented by Gavin Crooks. Gavin Crooks is responsible for Crooks’ theorem, which is described in my book; it is yet another one of these more refined second laws of thermodynamics that have been discovered recently. I think it was Gavin who had this kind of ploy. While people were filing into the room, he had playing on the screen a video in which you could see people moving backward and things levitating upward instead of falling down. And we thought, oh, this is just one of the usual videos being played backward, so we would all start thinking about the flow of time and how we can recognize when something is being played backward. But if you looked more closely, you would see that there was one woman who was moving forward and doing things in the normal direction. And so it was this really strange and interesting juxtaposition. What we expected to be an ordinary message about recognizing the arrow of time was skewed in this really fun, artistic way.

Okay, so about the question about work and heat. I talk about this a lot in chapter six of the book, which does involve definitions related to the ones that you described. Defining work and heat in a quantum setting is even more complicated than defining work and heat in the ordinary classical setting, in part because of the uncertainty principle: if we try to measure heat, then we need to measure energy, but measurements disturb quantum systems. So people have proposed many different definitions of heat and work in the quantum realm. There is a definition that states: any time we change the Hamiltonian of the system, we’re performing work. For people unfamiliar with the term, the Hamiltonian is kind of the structure of the system plus its environment, something that helps determine its energy. We would change a Hamiltonian by, for instance, taking an electron in a magnetic field and changing the magnetic field strength. That change of the magnetic field strength is the change of an external parameter of the sort that I mentioned in response to the first of these questions. So we can think of that as work. Meanwhile, any change of the quantum state itself is heat. How come? Well, imagine an ordinary classical system that is exchanging heat with its environment and coming to thermal equilibrium with it, approaching the same temperature as its environment. You can imagine that this classical system is, again, one of these many-particle systems, like gas in a kitchen. There are too many particles for us to know all of their configurations, so we don’t know exactly how much energy the system has. But we can imagine measuring the system’s energy at each of many instants throughout the process of thermalization. And as the system exchanges heat with its surroundings and comes to have a well-defined temperature, the probability of our measurement yielding this amount of energy or that amount of energy comes to have a very specific form. If we move from classical physics to quantum physics, probability distributions are replaced with quantum states. And indeed, as a quantum system exchanges heat with its surroundings, the quantum state comes to have this very specific form. So we often think of energy that changes the quantum state as quantum heat.
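
For readers who want this splitting in symbols, here is one common convention among the several Nicole mentions (a textbook gloss, not her exact formulation): the change in the average energy E = Tr(ρH) divides into a piece from changing the Hamiltonian (work) and a piece from changing the state (heat), and the “very specific form” reached during thermalization is the Gibbs state.

```latex
% One common convention (among several): with E = \mathrm{Tr}(\rho H),
dE = \underbrace{\mathrm{Tr}(\rho \, dH)}_{\text{work: Hamiltonian changes}}
   + \underbrace{\mathrm{Tr}(H \, d\rho)}_{\text{heat: state changes}},
\qquad
\rho_{\mathrm{thermal}} = \frac{e^{-H/(k_B T)}}{Z},
\quad Z = \mathrm{Tr}\, e^{-H/(k_B T)}.
```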

Paul: Excellent. Thank you. All right. So we’re all back online here. It’s 10:22, and we need to close down the formal part of the meeting at 10:30, so I’d like to have some succinct questions from our folks who are on stage today. So, John, thanks very much for that question. And IO, I’m trying to bring you up as a speaker and I keep pressing invite; I’m not sure what the bugs are in the system. Okay, so I’m gonna go to Josh. Josh followed by Kyle, then Liane, and then Dr. Yang. So Josh, you’re next: tell us who you are, where you are in the world, and what your question is today for our speaker.

Josh: Thanks, Paul. I’m Josh. I’m in Massachusetts. My family’s in the jewelry business, but I have an interest in space and quantum stuff. My question was: do you ever use your quantum research to study the future of, like, propulsion for rockets, or possible interstellar travel in the future?

Nicole: I have not studied that. I’ve mostly thought about things on a much, much smaller scale.

Paul: Kyle, you’re up next. So tell us who you are, where you are in the world, and what your question is, Kyle.

Kyle: Sorry, I’m just in a noisy environment and had to take some time to get over here. My name is Kyle, and I’m just on planet Earth. I’ve been working through your papers, and I noticed that there was one about the compaction of particles. So I looked at that, and I was thinking, it’s interesting that your lab or research group seems to be looking at things where we seem to have thought we’d already come to the most effective or efficient way of doing them. It seems that your work could go into improving some of these areas of life. I was just wondering, do a few things come to mind, a few areas you’re looking at improving, where we’ve assumed that we had the most effective or efficient way of doing something, but where your work might help us with that efficiency, or give us a different way of thinking about a problem we may have already thought we’d solved?

Nicole: Yeah, I’m especially interested in seeing if we can use quantum thermodynamic technologies in a useful way. People have defined quantum engines, refrigerators, ratchets, and batteries. And they can work; they have been realized experimentally. According to some metrics, they can even perform better than classical counterparts, for instance, performing work at a higher power. However, these quantum thermodynamic technologies are not very useful yet. For instance, to operate a quantum engine, you tend to need to bring it to very low temperatures, and cooling, refrigerating, a system requires work. But the engine is very, very small, so it could only give us very small amounts of work back. I’m interested in finding scenarios in which we can naturally insert a quantum thermal machine that would already be, say, at the right temperature, so that it could give us an advantage naturally, because of the nature of the environment. An example is an experiment that I’ve been collaborating on, just completed at Chalmers University in Sweden, using an autonomous quantum refrigerator. Autonomous means that this refrigerator doesn’t need to be driven by work; it just has access to two different environments at two different temperatures, and it extracts its own work by allowing heat to flow between the two environments. It siphons off some of that heat as work that powers it.

My experimental collaborators in Simone Gasparinetti’s lab are interested in building a quantum computer from artificial atoms, which need to be kept at very low temperatures. Large classical refrigerators are already used to keep these artificial atoms that cold. These refrigerators have many different layers; they’re like onions: the outermost layer is hottest and the innermost layer is coldest. The quantum computer sits inside the coldest layer. After a computation ends, the qubits are used up; they’re full of entropy, and they need to be cooled down even more. So we put a quantum refrigerator inside this giant classical refrigerator, in the center, where it’s already cold, so we don’t have to spend extra energy on cooling our quantum refrigerator. The quantum refrigerator ultimately will have access to not only this innermost layer of the fridge but also an outer layer, which is hotter. So the quantum refrigerator will have access to two environments at two different temperatures. Given those two environments, it can power itself to cool down these used-up qubits even more, to an even lower temperature than the lowest temperature imposed by the big classical refrigerator. So in this way, we’re trying to use a quantum thermal machine, an autonomous quantum refrigerator, to accomplish a task, the resetting, or extra cooling, of qubits in a quantum computer, better and in a new way.
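
A rough way to see why access to two temperatures can power cooling is the “virtual temperature” picture from simple models of autonomous absorption refrigerators; the notation here is ours, and this is a sketch of the general idea rather than a description of the Chalmers device. Two transitions of energies E_h and E_r, coupled to baths at temperatures T_h and T_r, act jointly on the target like a single bath at

```latex
% Virtual-temperature sketch for a simple absorption refrigerator:
T_v \;=\; \frac{E_h - E_r}{\,E_h/T_h \;-\; E_r/T_r\,},
% which, for suitable energy gaps, is colder than either physical bath,
% so heat flows out of the target qubits without any driving work.
```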

Paul: I had never heard of a quantum refrigerator before; I learn something else today. Thank you very much. Wonderful. I hope you got your question answered, Kyle. Good. So next on my list, I have Liane followed by Dr. Yang, then Jeffrey, and then Ethan and Ali. So in that order, please: Liane, tell us who you are, where you are in the world, and what your question is for our speaker today. Go ahead.

Liane: Hey, Paul, super interesting talk. She was very articulate and lucid; I learned a lot. I am midway between Vancouver and Victoria, on an island looking out at the North Shore mountains, and I’m a professor at the University of British Columbia. My question is: in 2012, there was a paper published in Psychological Review called “Psychological Entropy”. The authors developed a theory that when people are feeling anxiety, when there’s something that they don’t understand or can’t resolve, you can describe that as a kind of internal entropy. And they claim that that’s kind of a negative thing that makes people feel uneasy. But I think it can also be, and I wrote some papers elaborating on this, the wellspring for creative thoughts. I thought it’d be interesting to speculate a little about a possible relationship between entropy as you speak about it and this psychological interpretation of the concept, and whether it might relate to the quantum mechanical descriptions that have been used in the quantum cognition community. We’re not talking about quantum mechanics as it refers to events at the level of microscopic particles; they’re using a generalization of the mathematics of quantum mechanics to describe what appear to be states of entanglement between combined concepts in your mind that can’t be described using any other kind of formal … or to describe decision-making processes where you don’t have any opinion on the matter before you’re asked the question, and the asking of the question sort of collapses some opinion into your mind. All right, I’ll just leave it at that. I’m curious to hear how you answer that. Thanks.

Nicole: That’s interesting. I should say I have no expertise other than being a conscious human, no other expertise in psychology. So I really can’t comment, but thank you for the introduction to the topic.

Paul: Let’s move on to our other questioners. But just before we do, everyone should know that Liane Gabora has been a previous speaker in this clubhouse room and is a regular moderator who helps us onstage here. In the past, I think it was April, and you can go back to the recording, Liane talked about how our minds are able to recall past events in the absence of environmental cues and weave them into meaningful narratives, how that allows us to imagine possible futures, and how those abilities pave the way for the modern-day conception of time.

One of the things that I have learned, quite surprisingly, in my ignorance of psychology, is that by understanding neuroscience a little better, we think we understand why we as human beings are so confused about time: because we construct these stories. That’s really been quite insightful to me.

Next on my list, I have Jeffrey. Tell us who you are, where you are in the world, and what your question is today.

Jeffrey: I’m Jeffrey, from San Jose right now. I work at IBM Research. Thanks, Nicole, for being here today. I missed the first part, and I don’t want you to repeat anything you’ve already covered, but I was wondering if you could maybe speak on the importance of coherence as a thermodynamic resource, and on what engineering feats have to be achieved to realize practical applications of quantum thermodynamics.

Nicole: Okay, the first question is about coherence. First, I should provide some background about what coherence is. It’s easiest to explain mathematically, but I think of it as one of the wave-like properties of quantum systems. It’s pretty common to hear, in characterizations of quantum theory, that every system has both particle-like properties and wave-like properties. Coherences I think of as wave-like properties of quantum systems. Coherences are very important, and they are also very delicate. Suppose that we are trying to build a quantum computer, as many teams are, including IBM’s quantum team. We’ll have a bunch of particles, for instance a bunch of atoms, or, in the case of IBM’s team, a bunch of artificial atoms. These atoms come to be in some very entangled, highly correlated state. And the state has some wave-like properties that we can think of as coherences. The retention of the coherences in the quantum state is very important for the system’s continuing to exhibit quantum behaviors and work properly as a quantum computer. I think the question was, if I recall correctly, what are some thermodynamic applications of quantum coherences, since, as I said, quantum coherences are very important for information processing, for instance, the operation of a quantum computer. The coherences do have some thermodynamic applications. For instance, coherences are important for the operation of quantum clocks. And clocks are thermodynamic because there is a very important relationship between energy and time, and thermodynamics is the study of energy, so it is also a study of time.
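
For the mathematically inclined, one standard way to make “coherence” precise (a textbook definition, not Nicole’s phrasing): coherences are the off-diagonal entries of the density matrix in a chosen basis, often the energy eigenbasis, and a common way to quantify them is the relative entropy of coherence.

```latex
% In the energy eigenbasis \{|i\rangle\}, coherences are off-diagonal terms:
\rho = \sum_{i,j} \rho_{ij}\, |i\rangle\langle j|,
\qquad \rho_{ij} \ (i \neq j) \ \text{are the coherences.}
% One common measure is the relative entropy of coherence,
C(\rho) = S\big(\Delta(\rho)\big) - S(\rho),
% where \Delta(\rho) erases the off-diagonal terms and S is von Neumann entropy.
```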

I think of extracting work from a system, like having an engine operate, and keeping time as complementary activities. Suppose that you want to extract work from a system and be able to predict very well how much work you’re going to be able to extract. Then you would like your system to start off with a well-defined energy, so that you can predict how much energy you can extract. But as we know, quantum systems, because of the quantum uncertainty principle, don’t always have well-defined properties, so they don’t have to have well-defined energies. Instead, a quantum system could be in a superposition of many different energies. It turns out that this superposition, which is wave-like and has a lot of coherence, is a really useful state for a quantum clock to be in. So you can either have a quantum state that has a pretty well-defined energy and is useful for extracting work from, or you can have a quantum state that has lots of coherence, as defined with respect to energy, and is instead very useful for keeping time.
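
A minimal, textbook illustration of that complementarity (our example, not from the conversation): an energy eigenstate only acquires an unobservable global phase under time evolution, while an even superposition of two energies carries a relative phase that advances like a clock hand.

```latex
% Energy eigenstate: only a global phase, so nothing ticks.
|E_0\rangle \;\to\; e^{-iE_0 t/\hbar}\,|E_0\rangle .
% Coherent superposition: the relative phase tracks elapsed time.
\frac{1}{\sqrt{2}}\big(|E_0\rangle + |E_1\rangle\big)
\;\to\;
\frac{e^{-iE_0 t/\hbar}}{\sqrt{2}}
\Big(|E_0\rangle + e^{-i(E_1 - E_0)t/\hbar}\,|E_1\rangle\Big).
```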

Paul: Thank you, what a really great answer. So Jeffrey, did you get your question answered? Jeffrey: Yes. Paul: All right, that’s good.

Paul: So Ethan, what is your question? Ethan: Yes. Well, I read your book and it was fantastic, so thank you very much for writing it.

Nicole: Thank you.

Ethan: Yeah, one of the things that interested me in particular, and I may be confusing your book a little bit with Chiara Marletto’s book, but I think you were speaking in there about Robin Hood transfers in the environment, which is kind of connected to Jeffrey’s question. I’m curious about the connection, if you could speak on it, between something like algorithmic cooling in a quantum computer and those transfers. I may make a misstatement, so please just cut me off if I say something completely incorrect. But going from algorithmic cooling to decoherence via entanglement into the environment: is there some kind of understanding that there is structure in the Robin Hood transfers within the environment, in relation to the information that would remain in the quantum computer? And secondarily, is there any structure in the environment when it receives a constant perturbation from these, let’s say, algorithmic coolings? Hopefully that made some sense. My interest in this is that, if there is structure in the environment via this kind of perturbation, in the way that Prigogine showed there is interesting structure in dissipative systems, then this might be a sort of proto-precursor to matching one part of a system with its environment, in a way that would approximate the beginnings of evolution in the Darwinian sense, if there’s complex enough structure in the environment created by these possible perturbations. Hopefully that made some sense; if not, hopefully I was coherent. Thank you.

Nicole: First, you do have the right book; I do mention Robin Hood transfers. Robin Hood transfers show up in a model for thermalization. Suppose that we have a system, it could be classical, it could be quantum, and it’s undergoing some typical process: no work is being performed on it, it’s just, maybe, thermalizing. We can describe how its state is changing with a sequence of steps, and these steps are called Robin Hood transfers. You also mentioned algorithmic cooling. Algorithmic cooling is a process used for the same goal as the autonomous quantum refrigerator that I mentioned: we want quantum systems to be at low temperatures. If we have a bunch of qubits that have just finished a computation, then they’re used up, they’re entropic, they’re noisy. We need to get rid of the entropy; we need to cool these qubits even further. There are many different ways to cool qubits. One way is with the quantum refrigerator I mentioned; another strategy is called algorithmic cooling. It’s described in one of the last chapters of the book. In algorithmic cooling, we say: these qubits that have just finished a computation have some correlations between them. Let’s take advantage of the correlations and do a computation on the qubits, using the correlations to squish all the entropy and noise into just a few of the qubits. Those few qubits are going to be really, really dirty, noisy, and high-temperature, but the rest of the qubits will be pretty clean.

So that’s the process of algorithmic cooling, which also lies at the intersection of thermodynamics and computation. There were also some questions about entanglement with an environment. Indeed, if you have a quantum system, then it’ll tend to develop entanglement, very strong, non-classical correlations, with its environment, and that entanglement is what causes decoherence. We tend to want to stop decoherence from happening, for instance if we have a quantum computer: the quantum computer needs to be very well isolated.
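
As a toy illustration of the entropy-squishing idea, here is a classical sketch, not the reversible quantum circuit itself: the basic three-qubit compression step of algorithmic cooling boosts one qubit’s bias from ε to roughly (3ε − ε³)/2, a standard result that a quick Monte Carlo confirms. The function names and parameters are our own, for illustration.

```python
import numpy as np

def majority_bias(eps: float, n_trials: int = 200_000, seed: int = 0) -> float:
    """Estimate the bias of the majority vote of three independent bits.

    Each bit is 0 with probability (1 + eps)/2 (bias eps toward 0). The
    reversible 3-qubit compression step of algorithmic cooling leaves its
    target qubit with this same boosted bias, while pushing the entropy
    into the other two qubits.
    """
    rng = np.random.default_rng(seed)
    p0 = (1 + eps) / 2
    # Sample three bits per trial; True means the bit came up 1.
    bits = rng.random((n_trials, 3)) > p0
    majority_is_zero = bits.sum(axis=1) < 2  # at most one 1 => majority is 0
    return 2 * majority_is_zero.mean() - 1   # convert P(majority = 0) to a bias

eps = 0.1
print("simulated bias :", majority_bias(eps))
print("predicted bias :", (3 * eps - eps**3) / 2)  # standard compression formula
```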

Paul: Excellent, thank you. Really interesting. I’d never heard of Robin Hood transfers before I read your book; that was really a brilliant concept. And the sequence of steps is really insightful. It seems to me that each step has to have a kind of quantum collapse before you can correctly go on to the next step; otherwise, the whole thing stays in quantum superposition, I assume?

Nicole: Robin Hood transfers are interesting. They were actually originally proposed in economics, to describe the redistribution of wealth across a population. As a matter of fact, it turns out that they can happen with quantum systems or classical systems.
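
For the curious, the standard mathematical form of such a step (our gloss, from the majorization literature, not Nicole’s words): take probability weight from a “richer” outcome and give it to a “poorer” one, without overshooting.

```latex
% A Robin Hood transfer on a probability distribution p:
(p_i,\, p_j) \;\longmapsto\; (p_i - \delta,\; p_j + \delta),
\qquad p_i > p_j, \quad 0 \le \delta \le \frac{p_i - p_j}{2},
% so the pair ends no more unequal than it began; sequences of such steps
% model the leveling-out that thermalization performs on a distribution.
```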

Paul: Fascinating. We’ve only got one more questioner, and then we’re going to close the room down so you can get on with the rest of your day, Nicole, thank you. So Ali, you’re the last on my list. Tell us who you are, where you are in the world, and what your question is today for our speaker. Go ahead, Ali.

Ali: Hello, Paul. Good to see you again. Nicole, I want to thank you very much for this interesting topic and for your time. And Paul, as always, your rooms are fantastic and fascinating, and certainly a field for learning and contributing. So Nicole, my question is about your take, your perspective, on the operation of quantum mechanical processes in the field of biology. In the last decade or so, I’ve seen articles, as well as people talking, about the operation of quantum mechanics in biology, for example in the migration of birds. So what is your take on this kind of crossing of quantum mechanics into biology?

Nicole: Sure. First, I think that pretty much all physicists would agree that finding effects of quantum phenomena, large-scale effects, on biology is difficult, because biological systems are at high temperatures. To a quantum physicist, even room temperature is a high temperature. Biological systems are also watery, and they’re large. All of these properties tend to cause decoherence in quantum systems and make quantum systems lose their special quantum properties and behaviors. So if quantum effects can be found in biology, there might not be a whole lot of them, and they’re probably very difficult to find. Still, there are a few settings in which quantum effects may very well play an important role, or at least a role, in biology. You mentioned avian navigation. There’s an expectation that quantum effects may be influencing, or helping enable, birds to find their way, and indeed the functioning of the bird’s internal compass does depend on two quantum particles that are entangled. The exact role played by the entanglement, according to my understanding, is a little bit up in the air and needs to be nailed down. But there is what we might call a wave-like property of the state of the particles, for those familiar with the terminology, the relative phase in the quantum state, that does seem to play an important role. Smell is another setting in which quantum effects have been proposed, as is the transfer of energy in photosynthesis. So there are a number of possibilities. If anybody here is interested in learning more about quantum biology, then I recommend looking up materials by Clarice D. Aiello; Aiello is her last name. She is an assistant professor at the University of California, Los Angeles, and a big proponent of rigorous approaches to quantum biology. So I recommend checking out her work.
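
For reference, the “two entangled particles” are widely modeled by the radical-pair mechanism; here is the schematic version, standard in the literature rather than anything stated above: two electron spins begin in a singlet state, and magnetic interactions drive coherent singlet-triplet oscillations whose chemical outcome depends on the orientation of the Earth’s field.

```latex
% Radical-pair mechanism, schematically: the electron pair starts in a singlet,
|S\rangle = \frac{1}{\sqrt{2}}
\big(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\big),
% and Zeeman + hyperfine interactions coherently rotate it toward triplet
% states; singlet and triplet pairs react to different products, so the
% reaction yields carry information about the field's orientation.
```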

Paul: That was absolutely fantastic. Thank you. Well, Ali, I hope you got your question answered.

Ali: Yes, I certainly did. Nicole, thank you. By the way, I just put the name Aiello in the chat for anyone who wants to follow up.

Paul: Excellent, thank you. All right, so you have been listening to the clubhouse room It’s About Time. Today, we’ve had the honor of having Nicole Yunger Halpern talk to us about Victorian thermodynamics and the notion of entropy, which quantum information science has now extended. Fusing those modern and antiquated sciences, quantum thermodynamics is real-life steampunk. We have been honored with this fantastic talk and this incredibly articulate description of the notions of entropy and information. So I’d like everyone to open their microphones and join me in giving a round of applause to Nicole for such a fabulous presentation. Thank you.

Nicole: Thank you very much. It’s really been a pleasure to be here. So thanks for all the questions.

Paul: You’re very welcome. Okay, so for everyone who’s listening, the Quantum Steampunk book, I think, is at the top of the page. Yes, if you click on that, thank you. The book is full of absolutely wonderful things, and I was particularly amused by all of the English culture and food: all these descriptions of scones and teapots and Yorkshire puddings. It brings back lots of memories of when I lived in England, I think now over 30 years ago. What a delightful book; I thoroughly recommend it to absolutely everyone. And if you want to find out what tea and scones are like in England and how that applies to thermodynamics, please go and read Nicole’s book, because it’s an absolute delight. Next week, we have Mark Van Raamsdonk, who is at the University of British Columbia, I guess not far from Liane, who is also at that university. Mark is going to talk to us about how time emerges from entanglement. On the third of December, Donald Hoffman is going to come and talk to us about why spacetime is doomed; I think that’s his favorite phrase. And then on the 17th of December, we’re honored that Professor Lucien Hardy from the Perimeter Institute is going to talk to us about some topics related to time and causality. So I look forward to having those guest speakers in the next few weeks in this clubhouse room, It’s About Time. I won’t close down the room immediately, but our guest speaker is free to leave any time she would like. If anyone has any questions, or anyone on the stage would like to continue with any of the discussions, please feel free to do so.

Nicole: Thanks so much for hosting me. It’s really been a pleasure.