
**Description:** In this lecture, the professor discussed Feynman diagrams for light-atom interactions.

**Instructor:** Wolfgang Ketterle

Lecture 9: Diagrams for lig...

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Before I continue with the material, I want to show you at least the title of a recent paper in *Nature*, because it's related to material we have covered in this course. It's about the Kerr effect, the effect that one photon can create a phase shift for another photon.

And one goal, of course, for quantum computation, where everything is done with single photons, is to have a single-photon Kerr effect, that one photon can change the phase of another photon in a strong, noticeable way. So maybe one photon should create a phase shift on the order of pi. And this was reported here in this paper.

Of course, the non-linearity created by nonlinear crystals is much too weak for that. But what those authors did is they used microwave photons in cavities. And they were coupled through a sapphire substrate with a Josephson junction.

So the non-linearity here is the non-linearity of a Josephson junction, which is actually realized with a superconducting qubit. I can't explain many more details to you, but I just thought it's sort of cool to see how the Kerr effect, which we discussed for single photons, is realized, at least in the microwave domain.

And also, just sort of to illustrate that I hope this course enables you to read recent research papers, what those people measured is a Q representation. This is a coherent state. And then they showed, and this is the subject of the paper, that the coherent state, which has a well-defined phase, lost its phase through the Kerr medium.

And you clearly see there is a big phase uncertainty. But then after a certain time-- this is the experiment and this is the simulation-- there is a re-phasing, and the phase is back. There is a revival of the coherent state. All right.

Now I want to address one question which Cody asked about the g2 function and fluctuations of single-mode light. Let me just summarize. I told you that if you do the thermodynamics of a single mode, we find Bose-Einstein distribution of photons. And we have a thermal distribution. And a thermal distribution means sometimes we have more photons, sometimes we have less photons, depending on the thermal distribution.

And when we calculated what the intensity fluctuations were, we found they're characterized by a g2 function of 2. OK.
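As a quick numerical check of that statement (a sketch; the function name and the cutoff `nmax` are my own choices, not from the lecture), one can compute g2 = ⟨n(n−1)⟩/⟨n⟩² directly from the Bose-Einstein photon-number distribution:

```python
import numpy as np

def thermal_g2(nbar, nmax=2000):
    """g2 = <n(n-1)> / <n>^2 for a single-mode thermal state.

    The thermal (Bose-Einstein) number distribution with mean
    photon number nbar is P(n) = (1 - x) * x**n, x = nbar / (1 + nbar).
    """
    n = np.arange(nmax)
    x = nbar / (1.0 + nbar)
    p = (1.0 - x) * x**n               # thermal number distribution
    mean_n = np.sum(n * p)             # <n>
    mean_nn = np.sum(n * (n - 1) * p)  # <n(n-1)>
    return mean_nn / mean_n**2

# Thermal light gives g2 = 2 for any nbar, whereas a single coherent
# wave (Poisson statistics) would give g2 = 1.
```

The value 2 comes out independently of the mean photon number, which is exactly the statement about thermal intensity fluctuations above.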

Then a little bit later in this course-- actually, just this week-- I told you that single-mode light always has a g2 function of one. And what I meant here is the following, rather trivial thing. If you have a single mode, that means that your light is simply e to the i omega t, and the intensity is constant.

There are no intensity fluctuations. And also, because everything is sort of predictable, it's just one wave, the g2 function factorizes-- it can be re-expressed by the g1 function. So therefore, this is the most trivial case.

But the question now is, how do we reconcile those two statements-- that a single mode, e to the i omega t, does not have intensity fluctuations and therefore has a g2 function of one-- with our earlier treatment of single-mode black-body radiation?

And of course, the answer is, what the single mode is in one context is different from what the single mode is in the other context. Maybe let me explain that. Let's just create an ensemble of cavities.

We put them in thermal contact with a reservoir. And then we break the thermal contact with a reservoir. Each cavity has now a perfect single mode, e to the i omega t. But each cavity is filled with a different photon number according to the thermal statistics.

So therefore, if we just look at one cavity, we find no intensity fluctuations. The g2 function is one. But if we extend the ensemble average over all the different cavities, we find that there are fluctuations in intensity.

Well, we can now keep that in mind. But now we can say, well, let's just take one cavity which is weakly coupled to a thermal reservoir. And instead of looking at the ensemble average of many cavities, we look at the long-time average of this one cavity. And what will happen is thermal photons will be created, will disappear, and such. So now this one cavity fluctuates.

But technically, what that means now is it means that the sharp mode of the cavity is interacting with the environment, and it becomes broadened. It has a broadening delta omega. And this can be regarded as that we mix in modes of the environment.

So in that case, strictly speaking, it's no longer a single-mode cavity. So you have to consider those things. And depending on what point of view you want to take, you get a different result. Other questions?

Then let me ask you a question. Last class, I explained to you-- well, at least, tried to explain to you-- the g2 function for bosons and fermions with the counting statistics, with permutations and such. I wasn't sure, at least from one question I got, whether this was completely clear. Do you have any questions about that? [? Teroy. ?]

AUDIENCE: This seems very obvious. But during class, I was trying to do something with the thermal state. What is the definition of our thermal state in terms of any basis, just generally speaking? We write it as-- I thought it'd be something like e to the minus beta times the light Hamiltonian.

PROFESSOR: Yeah. So our definition of the thermal state-- when we had thermal light, we say that the statistical operator is given by that. And H is the Hamiltonian for single-mode light, which is h bar omega times n plus 1/2. Well, with suitable parentheses and summations. Other questions? OK.
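Written out explicitly (filling in the spoken "parentheses and summations" in my own notation), the thermal state of a single mode is:

```latex
\hat\rho = \frac{e^{-\beta \hat H}}{\operatorname{Tr}\, e^{-\beta \hat H}},
\qquad
\hat H = \hbar\omega\left(\hat a^\dagger \hat a + \tfrac{1}{2}\right),
\qquad
\hat\rho = \sum_{n=0}^{\infty}
\frac{e^{-\beta\hbar\omega n}}{\sum_{m} e^{-\beta\hbar\omega m}}\;
|n\rangle\langle n|
```

The zero-point energy cancels between numerator and denominator, leaving the Bose-Einstein weights over the number states.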

Then let's get to the main subject we want to discuss now. And these are actually Feynman diagrams. I wanted to give you an exact definition and a deep understanding of what it means when we talk about processes of absorption and emission, but also about absorption and emission processes which violate energy conservation.

And some people refer to them as virtual photons. The reason is that virtual photons cannot really exist for a long time. When you emit a virtual photon, another photon has to be absorbed immediately to reconcile energy conservation, as we will see in a moment.

So the goal of this presentation is that I want you to realize that each of those doodles has an exact mathematical meaning. Each of those diagrams represents one term, or a class of terms, in an exact solution for the time evolution of the system.

So in other words, if you would ask me, we have a ground and excited state. Is it possible that the ground state emits a photon, goes to a virtual state, emits another photon of another frequency, and then somehow absorbs a photon, goes up here, and eventually takes another photon and is back in the ground state? Is that a possibility? Can that happen?

And I think the message here is yes, everything happens. The system is trying out all of its possibilities. And the true time evolution is the sum over all those possibilities, of all the amplitudes related to those diagrams.

But what I want to show you is that the weirder the diagrams get-- the more you go in energy below the ground state, the more you go away from real atomic states-- the bigger your energy denominator becomes. And that means those diagrams have a smaller and smaller weight. And in all practical calculations, we neglect them.

But I want you to be able to see that and realize: I know exactly what it means. It means this and this term in a summation over all the amplitudes which the quantum system is exploring.

And I think with that, we really learn something about physics. We learn about what is actually inside the Schrodinger equation. A lot of people actually, before they take this class, think that this is just nonsense, that this has no physical reality. But I hope after this class, you see that pretty much everything you draw has physical reality. It's just that it may not contribute a lot.

So what we have done-- and let me just start here and invite your questions. We have figured out how an initial state evolves with a time evolution operator to another basis state, toward the final state. And formally, this is the formal and exact solution of the Schrodinger equation.

We have to sum over all orders in the interaction. Well, I will immediately tell you what the first and second order are. We are not going much higher. But if you want, here, you can. And then what you have to do in the time evolution is sum over intermediate times. You have to allow the system to propagate, to change its state, propagate again, change its state again.

And the times where the change of state happens, that can happen at any time between your initial and your final time. And we integrate over all possible times.
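Written as a formula (the standard Dyson series, in my notation; the blackboard expression itself is not reproduced in the transcript), this sum over orders and intermediate times reads:

```latex
U(t_f, t_i) = \sum_{n=0}^{\infty} \left(\frac{-i}{\hbar}\right)^{n}
\int_{t_i}^{t_f}\! dt_n \int_{t_i}^{t_n}\! dt_{n-1} \cdots \int_{t_i}^{t_2}\! dt_1\;
U_0(t_f, t_n)\, V\, U_0(t_n, t_{n-1})\, V \cdots V\, U_0(t_1, t_i)
```

Each factor of V is a vertex where the state changes; each U_0 is free propagation between vertices; and the nested integrals over t_1 ≤ t_2 ≤ … ≤ t_n are exactly the "integrate over all possible times" of the diagrams.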

And I showed you-- and I think this was the very last thing we did on Wednesday-- how this diagram can be translated into a mathematical equation. And I think I picked a second-order diagram.

But I think from the way I presented it, it should be clear now how any such diagram can be translated into an equation. And eventually, you have to perform an integral over all intermediate times. And this is part of the time evolution of the system. Questions about that?

AUDIENCE: We allowed these to be [INAUDIBLE] right? Because we haven't done anything with that?

PROFESSOR: Good question. I assume that it's time-independent. Actually, right now, I assume, just to assume something, that it's a dipole interaction.

My understanding is if it has an explicit time dependence, it would just appear there, and it would not change the-- would it? Wait, would it?

Actually, when we derived the differential equation for the time evolution operator, did we assume that v was time-independent or not? I don't think we did. We integrate over time. I think that's the formal solution.

Remember, I wrote down the differential equation for the time evolution operator and then said, this is a formal solution. My gut feeling is nothing changes when v is time-dependent, but this one step should be confirmed. Other questions? OK.

Then let me just spend a few minutes on connecting what we have done with standard first- and second-order perturbation theory. I want to sort of throw a few definitions at you: S-matrix, T-matrix.

But I'm not really going into any details. I just want to sort of wrap up the perturbative treatment by connecting it with standard first- and second-order perturbation theory.

But after that, in a few minutes, I want to have a discussion about the nature of intermediate and so-called virtual states, and then talk about the interaction v, whether it's the d dot E or the p minus A interaction. OK.

So far, I've presented the formalism as if we start at an initial time and end at a final time. But usually, these are microscopic times. And in the experiment, we observe a system for macroscopic times. So for that purpose, we usually go to the limit where the initial and final times are infinitely far apart.

And that actually means we have energy conservation. The initial and final energy have to be the same. And you can see that, for instance, even if we restrict ourselves to second order. Remember, we had all those propagators, e to the i energy over h bar times t. And when we integrate over long times, they will just average out to zero unless the initial and final energy are the same.

And technically, you have seen that in the undergraduate derivation of second-order perturbation theory. You integrate the exponential function. And eventually, for sufficiently long times-- capital T is the difference between the initial and final times-- it approaches a delta function. So that's how energy conservation comes in.
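Concretely (a standard textbook step, written in my notation), the integrated propagator phase gives the familiar sine function, which becomes the energy-conserving delta function at long times:

```latex
\int_{-T/2}^{T/2} e^{\,i (E_f - E_i)\, t/\hbar}\, dt
= \frac{\sin\!\left[(E_f - E_i)\, T / 2\hbar\right]}{(E_f - E_i)/2\hbar}
\;\xrightarrow{\;T \to \infty\;}\; 2\pi\hbar\,\delta(E_f - E_i)
```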

So the fact that we have energy conservation is then used to define the S- and T-matrix. The transition amplitude from the initial to the final state, what we have just calculated and discussed, is called the S-matrix. That's just how it is called.

To lowest order, of course, the time evolution is the unit matrix. So therefore, we get a Kronecker delta. And then we discussed that in the limit of large times, we have a delta function.

So therefore, if we take the S-matrix, which contains the transition amplitudes we have calculated, and take out of the S-matrix the unit matrix, and factor out the delta function, then what is left is the so-called T-matrix.

When we talk about transition amplitudes and transition probabilities, we are asking, what is the probability that the system has gone from an initial state to a final state-- say, from the excited state to the ground state through spontaneous emission? Well, a probability is an amplitude squared. So we take the matrix element of the S-matrix and square it.

And from the line above, this now involves the matrix element of the T-matrix squared. There's a delta function which becomes a delta function squared. But if you integrate over all final states-- I mean, a delta function always requires that you do some integration later. Otherwise, the delta function doesn't make sense. Neither does the delta function squared.

You can actually see it explicitly from the sine function above: the delta function squared just turns into the time t times a single delta function. So therefore, if we divide the probability by the time, we have our transition rate.

And what we obtain is the second-order expression for the transition rate, which is essentially Fermi's golden rule. So anyway, this just finishes the formal derivation.
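In compact form (standard definitions, consistent with the spoken description above), the S-matrix is the unit matrix plus the delta function times the T-matrix, and the squared delta function divided by T gives the rate:

```latex
S_{fi} = \delta_{fi} - 2\pi i\,\delta(E_f - E_i)\, T_{fi},
\qquad
T_{fi} = V_{fi} + \sum_{k} \frac{V_{fk}\, V_{ki}}{E_i - E_k} + \cdots
```

```latex
\Gamma_{i \to f} = \frac{|S_{fi}|^2}{T}
= \frac{2\pi}{\hbar}\, \left|T_{fi}\right|^2 \delta(E_f - E_i)
```

Truncating T at first order in V recovers the familiar form of Fermi's golden rule; the second-order term contains the sum over intermediate states k with the energy denominators discussed next.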

But now I want to discuss the nature of those intermediate states. And maybe what you should have in mind is the intermediate state which comes about when we have the system in the ground state, and it emits a photon and goes down to this intermediate state-- this weird state which seems to be lower in energy than the ground state.

Well, what happens is those intermediate states, when they appear after a vertex, propagate with their energy. And if their energy is less than the initial energy, the intermediate state k has a phase factor in its propagation which is determined by the difference between its energy and the initial energy.

So when we violate energy conservation in an intermediate state, delta E k is non-zero, and it is larger the more we violate energy conservation.

And then in the solution for the time evolution, we have to integrate over all intermediate times. So what we have here is an oscillating phase factor. And when we integrate something oscillating over longer than an oscillation period, it averages out to zero.

So therefore, those intermediate states which are off the energy shell, which seem to violate energy conservation, can only noticeably contribute over a duration which is h bar over the energy defect, delta E k.
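As a sketch of that estimate (in my notation): the intermediate state contributes a phase factor set by its energy defect, and the time integral over it only survives for short durations:

```latex
\text{intermediate state } |k\rangle:\quad
e^{-i\,\Delta E_k\, t/\hbar},
\qquad
\int_0^{\tau} e^{-i\,\Delta E_k\, t/\hbar}\, dt \;\approx\; \tau
\quad \text{only for } \tau \lesssim \frac{\hbar}{\Delta E_k}
```

For longer durations the oscillations average the integral to zero, which is the quantitative content of "the virtual state lives for a time h bar over delta E."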

So it is correct to say that the system, in its time evolution, can for short times, so to speak, violate energy conservation, protected by Heisenberg's uncertainty relation.

Or you can say, the system can do whatever it wants. It can spontaneously create 10 photons. But this is pretty much like taking money-- taking a loan from Heisenberg's bank. And after a very, very short time, you have to pay it back by having another process which brings you back to the correct energy.

I should put it in quotation marks when I say we violate energy, because energy is not really defined or measurable at these intermediate times. We start a process with a quantum system, and we look at what happens afterwards.

And whenever we assess energy, it is assessed when the final time is much, much larger than the initial time. And I just showed you that eventually, the system has to be back at a final energy, Ef, which is identical to the initial energy, Ei.

So in other words, I just want you to keep it in mind. I've proven to you energy conservation in the limit that t final minus t initial is large. And when I'm now talking about non-conservation of energy, I do that in quotation marks, because we know energy is conserved in the end.

It's just that for very, very short times in the time evolution of the system, there appear virtual states which seem to violate energy conservation. But you can think about it in that way. Questions about that?

Finally, let me now address the question, is everything I'm explaining to you really happening? Is it really happening in a physical system? Well, the first answer is, I wouldn't tell you about it if it had no reality. So yes. You can imagine that this is what your atom is doing.

You can imagine that the hydrogen atom, with its Lamb shift, is permanently emitting and absorbing photons-- all sorts of weird photons, from the ground state, going lower, up, and back again, and such. Yes.

This is the way we derive some of the most precise predictions in physics, namely in QED. However, we can represent systems in different gauges, in different representations. And we discussed earlier that we often take the dipole representation for the light-atom interaction. But there's also the p minus A representation.

And if you look at the two, both the d dot E and the A dot p interaction are the product of something which creates and annihilates single photons-- a plus a dagger-- and the p operator or the d operator, which connect the ground with the excited state. So those matrix elements tell you, you can only emit or absorb a photon when you go from the ground to the excited state.

However, in the p minus A representation, we also have the A squared term. And the A squared term-- because there is no atomic operator in front of it-- allows you to scatter two photons without changing the quantum state of the atom.
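For comparison, the two interactions being contrasted come from the dipole coupling and from the standard minimal-coupling expansion (written here in my notation, with q the charge):

```latex
H_{d\cdot E} = -\,\hat{\mathbf d}\cdot \hat{\mathbf E},
\qquad
\frac{(\hat{\mathbf p} - q\hat{\mathbf A})^2}{2m}
= \frac{\hat{\mathbf p}^{\,2}}{2m}
- \frac{q}{m}\,\hat{\mathbf A}\cdot\hat{\mathbf p}
+ \frac{q^2}{2m}\,\hat{\mathbf A}^{2}
```

Since the field operator A is proportional to a plus a dagger, the A squared term contains pieces that annihilate one photon and create another with no atomic operator attached, which is exactly the two-photon scattering described above.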

But this does not contradict anything. You can take either approach. And in your homework, you will actually do that when you calculate Rayleigh scattering and Thomson scattering. And you will find out that the two approaches give identical results.

So if you now ask the philosophical question, can an atom scatter a photon without changing its quantum state, well, the answer to this question is actually gauge-dependent.

But maybe to lift away some of the confusion, one can, of course, also say, the a squared term is really important. And it's a simple description of Thomson scattering. Thomson scattering is about photons which have much, much more energy than the energy difference between the ground and excited state.

So therefore, if you want to completely describe the system with the dipole approximation-- and you can, it's just more complicated-- you have to scatter a photon. But because the photon has so much more energy, it's so far away from resonance, this intermediate state has a huge energy defect.

And that means, as I just explained to you, the system wants to immediately pay back the money to Heisenberg's bank. So it will remain in this state only for a very, very short time. And now the other gauge tells you, this very, very short time can also be zero.

So you see, it's not as dramatically different as you might assume. But this is just something which is common: if you have different representations, the physics tastes different, but you always have to remind yourself that when you calculate an observable result, the two different gauges must exactly agree. Questions? All right.

So this is all I wanted to say about the diagrammatic solution for the time evolution of a quantum system. But I want to use it now. I think this is interesting stuff. It's a powerful method. And I want to illustrate to you how this method can be used in the next two sections.

Our next section is on van der Waals interactions. The chapter on van der Waals interactions is quite interesting for the following reason.

It really tells us something about the vacuum. I think modern physics has come to the conclusion that the vacuum is one of the most interesting subjects to study, because the vacuum is alive. It's filled by virtual photons, virtual particles.

And as we know now, by a condensate of the Higgs field. So there is a lot of stuff, a lot of structure, a lot of phenomena associated with the vacuum.

And the subject of van der Waals interactions is a nice way to talk about the vacuum, but it's nice also for the following reason. There is a completely, I would say, semi-classical way: just use the Schrodinger equation and calculate what the van der Waals interaction between two atoms is.

So you can just calculate it in perturbation theory. And in your whole calculation, you never use the quantized electromagnetic field. Photons never appear. You just use perturbation theory. And you do that in your homework. It's rather straightforward. And you get the van der Waals interaction.
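The structure of that perturbative calculation can be sketched schematically (my notation; C6 is the usual van der Waals coefficient): the dipole-dipole coupling falls off as one over R cubed, it has no diagonal matrix element in the ground state, so the leading energy shift is second order,

```latex
V_{dd} \sim \frac{\mathbf d_1 \cdot \mathbf d_2}{R^3},
\qquad
\Delta E_{\mathrm{vdW}} = \sum_{k \neq g}
\frac{\left|\langle k |\, V_{dd}\, | g \rangle\right|^2}{E_g - E_k}
\;\propto\; -\,\frac{C_6}{R^6}
```

and squaring the one-over-R-cubed coupling is what produces the characteristic one-over-R-to-the-sixth attraction.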

On the other hand, I started to derive for you the theory of the quantized electromagnetic field by saying, we really look at everything. We have charges, and we have the electric and magnetic field. And we described everything.

And the way we separated it is we had this Hamiltonian H naught, which gives us the atom. And the longitudinal Coulomb field became part of it. So this is the degree of freedom of the atom. And we have another H naught for the other atom.

And then we had the radiation field and the coupling to the atom described by our operator V, which can be the dipole operator or can be written down in the other gauge.

But what I'm telling you is, the way we have fundamentally divided the world-- into atoms, which are neutral objects, and the rest, which is interactions with the radiation field-- should tell you that the van der Waals interaction between two atoms must have a description in QED where the van der Waals interaction between atoms comes from the exchange of photons.

So there are two very simple pictures of the same physics. In one, you don't even know that there are photons; you just do perturbation theory. But in the more comprehensive description where we include the photons, you should be able to understand the van der Waals interaction between two atoms as due to the exchange of virtual photons.

In other words, one atom in the ground state emits a photon, going down from the ground state. Now, if you have only one atom, it has to reabsorb the photon again and be back in the ground state; otherwise, energy conservation would be violated. But if you have two atoms, one atom can emit a photon, and the other atom can absorb it. And then the other atom can send a photon back.

And if we consider that process, we will actually find the Van der Waals interaction.

So I hope this is showing interesting physics from two angles-- that something which maybe looked trivial a long time ago now looks much richer, because those forces are really mediated by virtual photon pairs. So that's sort of the discussion I want to go through.

There is another aspect to it. And this comes in when we go from the van der Waals force to the Casimir force. The Casimir force has one exact derivation, which I want to share with you, which relates the Casimir force to the vacuum fluctuations of the electromagnetic field.

So eventually, for those forces-- between two metal plates, between a neutral atom and a plate, between two neutral atoms-- we will have three different pictures. One is we use the semi-classical dipole field as a perturbation operator. You don't even think about it. It's trivial, and you check it off in your homework.

The second one is you look at the exchange of virtual photons. And the third one is you only look at the zero-point fluctuations of the electromagnetic field. What is real here? What causes it? We'll see.
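For orientation, the standard Casimir result for two ideal conducting plates separated by a distance d (quoted here, not derived in this lecture) is a zero-point energy per unit area, whose derivative gives the attractive force:

```latex
\frac{E(d)}{A} = -\,\frac{\pi^2 \hbar c}{720\, d^3},
\qquad
\frac{F(d)}{A} = -\frac{\partial}{\partial d}\frac{E(d)}{A}
= -\,\frac{\pi^2 \hbar c}{240\, d^4}
```

Notably, this expression contains only h bar and c, no atomic properties at all, which is why it can be attributed entirely to the vacuum fluctuations of the field between the plates.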
