Summary of what happened in QGSS 2022
IBM has been conducting the Qiskit Global Summer School since 2020, usually around mid-year. This year was no different, and the focus was on Quantum Simulation. A few months ago, when the event was announced, I was in two minds about whether to register, mainly because I felt I didn't have enough background in Chemistry, nor does my work involve any Chemistry. Following my heart, I decided to register anyway, and trust me, I enjoyed every bit of it. I learnt a lot! Many topics covered in the summer school are useful for any quantum computing enthusiast. I also appreciate Quantum Chemistry a lot more now, and who knows, many students from the summer school, including me, might some day switch to working in this area of Quantum Computing. All thanks to how inspiring this event has been!!
One can think of this blog as my key takeaways and learnings from the summer school. I highly recommend that those who couldn't make it to the summer school this year watch the lectures once they are released publicly. I would also mention that each topic covered in the summer school deserves a separate blog, or maybe a series of blogs. This blog is really just a summary. Maybe I will write more blogs covering different topics in the future.
Day 1 started with exceptional lectures on the History of Quantum Computing and the Motivation for Quantum Simulation by Olivia Lanes. She covered a lot of quantum computing basics. What I enjoyed most were her detailed explanations of the Stern-Gerlach experiment and the famous double-slit experiment. I also loved the famous Richard Feynman quotes on the slides, one of them being:
For me, before I attended the summer school, simulations only meant the standard numerical methods or something like the popular Monte Carlo simulations. It was very enlightening to understand how quantum simulations are different. She explained that a true simulation attempts to reproduce exactly how nature, or a system in nature, behaves, whereas numerical methods are approximations, and that's where quantum computers come into the picture. She even provided an overview of quantum hardware, time evolution, etc. Overall, her lectures were truly motivating and indicative of what was to come in the summer school.
Days 2 and 3 featured really awesome lectures by Maria Violaris. On day 2, she covered the basics of quantum computing with some mathematical rigor, including single-qubit gates, multi-qubit gates, the Bloch sphere, unitary and Hermitian matrices, pure and mixed states, density operators, measurements, etc. She also covered Quantum Teleportation, which is a very interesting topic. You could also go through these topics in the Qiskit Textbook, and there are some articles written on QuantumGrad as well. Day 3 was extremely interesting. She started with an explanation of the all-important Schrödinger equation:
One key component here is the H with a hat. That's the notation for what is known as a Hamiltonian. Simply speaking, the Hamiltonian can be considered the total sum of the potential and kinetic energies of the system in the position basis. This Hamiltonian turns out to be a Hermitian matrix. Hermitian matrices are diagonalizable and their eigenvalues are always purely real numbers (with no imaginary part). The eigenvectors, or eigenstates, are the discrete energy states of the system, and the corresponding eigenvalues are the associated energy values. The state with the lowest eigenvalue is the ground state, and the other states are all excited states. Maria derived the exact solution to the Schrödinger equation using properties of the matrix exponential:
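As a quick sanity check, here is a tiny NumPy sketch (my own toy Hamiltonian, not one from the lectures) verifying that a Hermitian matrix has purely real eigenvalues and that the lowest one is the ground state energy:

```python
import numpy as np

# A toy 2x2 "Hamiltonian" (hypothetical numbers, just for illustration).
# Hermitian means H equals its own conjugate transpose.
H = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 3.0]])
assert np.allclose(H, H.conj().T)  # H is Hermitian

# Even a general eigensolver returns (numerically) real eigenvalues for H
evals, evecs = np.linalg.eig(H)
print(np.allclose(evals.imag, 0.0))  # True: eigenvalues are purely real

# eigh exploits Hermiticity and returns the eigenvalues sorted,
# so the first entry is the ground state energy
energies, states = np.linalg.eigh(H)
ground_energy = energies[0]   # lowest eigenvalue = ground state energy
ground_state = states[:, 0]   # the corresponding eigenstate
```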
She further showed mathematically that the above solution is unitary. What was amazing and good to learn was that she clearly showed when her derivations or proofs would not hold because of non-commutativity of the operators. We finally concluded that for every Hermitian matrix there exists a unitary matrix, and for every unitary matrix there exists a Hermitian matrix, which is clear from the solution of the Schrödinger equation. This fact is also explained well in the Proving Universality chapter of the Qiskit Textbook.
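We can check this numerically too. Below is a small sketch (my own example, using SciPy's matrix exponential) showing that e^(-iHt) is unitary for a Hermitian H:

```python
import numpy as np
from scipy.linalg import expm

# Take the Pauli X matrix as a toy Hamiltonian
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
t = 0.7  # an arbitrary evolution time

U = expm(-1j * H * t)  # solution of the Schrodinger equation: U = e^{-iHt}

# Unitarity: U†U = I
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```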
We can see that as the dimension of the Hamiltonian matrix increases, it can get extremely difficult to simulate it classically. Note that for n qubits, the Hamiltonian is a 2^n × 2^n matrix, which would mean solving 2^n equations simultaneously. So what do we need? A QUANTUM COMPUTER!!
It turns out that the exponential of the Hamiltonian is not always easy to calculate, mainly because of the non-commutativity of operators. Remember how easy it was for us in high school to just say e^(a+b) = e^a * e^b or e^(ab) = e^(ba). Operators work differently, and so we have something called Trotterization. Briefly, what the Trotter formula says is that we can split the evolution into many small time intervals; over such intervals, the useful properties of the exponential approximately hold, so we take the product of the exponentials over all the time intervals and then take the limit as the interval becomes infinitesimally small. This might have sounded confusing, so I will put it mathematically like below:
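Here is a small NumPy/SciPy illustration of the idea (my own toy example, with Pauli X and Z as the two non-commuting terms): the first-order Trotter product approaches the exact exponential as the number of steps grows.

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting Hamiltonian terms: Pauli X and Pauli Z
A = np.array([[0, 1], [1, 0]], dtype=complex)   # X
B = np.array([[1, 0], [0, -1]], dtype=complex)  # Z
t = 1.0

exact = expm(-1j * (A + B) * t)  # exact time evolution under H = A + B

def trotter(n):
    # First-order Lie-Trotter formula: (e^{-iAt/n} e^{-iBt/n})^n
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    return np.linalg.matrix_power(step, n)

for n in (1, 10, 100):
    err = np.linalg.norm(trotter(n) - exact)
    print(n, err)  # the error shrinks as n grows (roughly like 1/n)
```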
Maria also covered Quantum Phase Estimation and complexity classes. For quantum phase estimation, we can go through the Qiskit Textbook, and there are also amazing lectures from QGSS 2020 that cover it. For complexity and big O notation, I recommend this video.
I will come to Day 4 a little later. For now, let me tell you about Day 5. Day 5 was all about noise in quantum hardware, and it was taught brilliantly by Zlatko Minev. This is such a useful and important topic for everyone interested in quantum computing. He started with a very apt quote by Asher Peres: 'Quantum phenomena do not occur in a Hilbert space, they occur in a laboratory.' What this essentially means is that in all the mathematical calculations we did on days 2 and 3, we assumed all the gates and all the matrices were perfect, but this is not true in an actual experimental or lab setup. Zlatko started with the basics of applying an X gate. He gave similar lectures in QGSS 2021, so I think I can afford to take some amazing pictures from those sessions that were also present in this year's lectures. He mentioned on the QGSS Discord server that he might release an introductory paper on quantum noise, and I really look forward to the final version of that. Coming back to the topic, he started with the following:
Recall that the X gate flips state |0> to state |1> and state |1> to state |0>. |0> and |1> are eigenstates of the Pauli Z operator. This can be easily checked mathematically. To see the matrix representations of the Pauli matrices, we can check this chapter of the Qiskit Textbook.
An important concept here is the expectation value with respect to an operator. For an operator K, the expectation is denoted <K>. If we are specifically given a state |x>, then the expectation of K with respect to |x> is <x|K|x>. Why is this the case? Remember, we said that the operator has discrete energy states that are eigenstates of the operator, and the corresponding eigenvalues are the energy values. What does measurement really mean? Suppose we are looking at a point in the XY plane. The most intuitive way to describe the point is in terms of its closeness to the x-axis or y-axis, which in mathematical terms means the projection of the point onto the x-axis or y-axis. A quantum state, in general, is in a superposition of some basis states. When we measure it, or when an observer sees it, it collapses into one of the basis states, which is more digestible for us in terms of the classical world. If we think of the famous example of Schrödinger's cat, it might be both dead and alive in the quantum world, but we as observers can only see it dead or alive, not both, classically.

In the XY plane example, we can have different frames of reference for measurements; it's just that all the frames would have 2 orthogonal axes. Similarly, we can measure a state with respect to different operators. <x|K|x> in simple language means measuring |x> several times with respect to the operator K. Each measurement gives us some eigenstate (assuming |x> is in a superposition of the eigenstates of K), and this eigenstate has a corresponding eigenvalue. We take the average of all the discrete eigenvalues we get through these several measurements and call that the expectation of K w.r.t. |x>. This average should ideally equal the value we get upon calculating <x|K|x>, but because of noise, we might not get the exact number.
The fact that the ideal expectation value is <x|K|x> can be seen by writing K as m1|m1><m1| + m2|m2><m2| + m3|m3><m3| + …, where m1, m2, m3, and so on are the eigenvalues of K and |m1>, |m2>, |m3>, and so on are the corresponding eigenstates.
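Here is a tiny NumPy sketch of this (my own toy observable and state, not from the lectures): the direct value <x|K|x> matches the eigenvalue-weighted average from the spectral decomposition.

```python
import numpy as np

# Toy Hermitian observable K and a toy state |x>
K = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 1.0j]) / np.sqrt(2)

# Direct expectation <x|K|x>
direct = np.real(x.conj() @ K @ x)

# Spectral form: K = sum_i m_i |m_i><m_i|,
# so <x|K|x> = sum_i m_i |<m_i|x>|^2
m, vecs = np.linalg.eigh(K)
probs = np.abs(vecs.conj().T @ x) ** 2  # Born-rule probabilities |<m_i|x>|^2
spectral = np.sum(m * probs)

print(np.isclose(direct, spectral))  # True: the two agree
```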
Coming back to Zlatko's example: if after applying X gate(s) we measure the final state with respect to the Z operator, we would expect to get either <1|Z|1> = -1 or <0|Z|0> = +1. This is because the X gate flips |0> and |1>, as we mentioned earlier, and |0> and |1> are eigenstates of Z with eigenvalues +1 and -1 respectively. So it should look like below, with varying circuit depth (d):
But with a noise called coherent noise, caused by miscalibration of gates (either over-rotation or under-rotation), one would instead get:
The reality is different still; the actual results we see are:
The steady decay in the oscillations is caused by what is known as incoherent noise. This comes from environmental and other external factors that lead to mixed states. Zlatko explained this really well.
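To build intuition, here is a small toy simulation of both effects (my own sketch with made-up noise parameters, not Zlatko's code): a slightly over-rotated X gate applied repeatedly (coherent noise), plus a depolarizing channel that damps the Bloch vector (incoherent noise).

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

eps = 0.05  # coherent over-rotation per gate, in radians (hypothetical value)
p = 0.02    # depolarizing probability per gate (hypothetical value)

# An "X gate" that rotates by pi + eps instead of exactly pi
U = expm(-1j * (np.pi + eps) / 2 * X)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|
expZ = [np.real(np.trace(Z @ rho))]              # ideal <Z> = +1 at depth 0
for d in range(1, 41):
    rho = U @ rho @ U.conj().T                   # coherent (miscalibrated) gate
    rho = (1 - p) * rho + p * np.eye(2) / 2      # incoherent (depolarizing) noise
    expZ.append(np.real(np.trace(Z @ rho)))

# Ideally <Z> would alternate +1, -1, +1, ...; with noise the oscillation
# drifts (coherent part) and decays toward 0 (incoherent part).
print([round(v, 3) for v in expZ[:6]])
```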
There is another error caused by taking a finite number of measurements. We can never really measure an infinite number of times, yet the ideal value <x|K|x> assumes we are doing infinite sampling or measurements. This error is called projection noise.
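A quick sketch of projection noise (my own toy example): estimating <Z> for the |+> state, whose ideal expectation is 0, from a finite number of shots.

```python
import numpy as np

rng = np.random.default_rng(42)

# The |+> state measured in the Z basis: ideal <Z> = 0,
# with outcomes +1 and -1 each occurring with probability 1/2.
def sampled_expectation(shots):
    outcomes = rng.choice([+1, -1], size=shots, p=[0.5, 0.5])
    return outcomes.mean()

for shots in (10, 100, 10000):
    print(shots, sampled_expectation(shots))
# The estimate fluctuates around 0; the spread shrinks like 1/sqrt(shots).
```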
Another noise Zlatko explained was readout noise/error, also known as state preparation and measurement (SPAM) noise. This is related to not having an ideal physical measurement process or measurement apparatus. The reason we don't get exactly +1 and -1 at depth 0 is readout noise.
The rest of the days were all about Quantum Simulation and Chemistry, with wonderful and very insightful lectures by Jeffrey Cohn, Panos Barkoutsos, Yukio Kawashima, Ieva Liepuoniute and Alexander Miessen. The Day 9 lectures by Alexander Miessen, on Quantum Dynamics, were amazing and covered time evolution in even greater detail. In fact, there is a new module called Qiskit Dynamics that we should try exploring. This is the GitHub link.
There is a lot covered in these lectures and it is impossible to write all of it in a single blog, but I will briefly cover it here. There are also topics here that I still need to understand better.
A very simple picture of an atom, as we might have seen in high school, looks like:
and we might have learnt that electrons repel each other, protons repel each other, but electrons and protons attract each other (opposite charges attract, like charges repel), and that protons and neutrons sit together at the center of the atom, in a region called the nucleus. We also discussed above that the Hamiltonian associated with a quantum system is basically the total potential energy plus the total kinetic energy in the position basis. Now, potential energy is associated with these attractions and repulsions between pairs of particles (electrons, protons, neutrons), and kinetic energy is associated with the movement of these particles. In nature, several chemical reactions take place that involve different atoms forming bonds and resulting in molecules. For example, a water molecule is formed from two hydrogen atoms and an oxygen atom, and a hydrogen molecule itself is formed from 2 hydrogen atoms. So for a molecule, we also need to consider interatomic attractions and repulsions. Below is how the molecular Hamiltonian looks:
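The slide with the Hamiltonian isn't reproduced here, but the standard textbook form (written already in atomic units, ħ = e = mₑ = 1, with Z_A the nuclear charges) is:

```latex
\hat{H} =
  -\sum_{A} \frac{1}{2M_A} \nabla_A^2    % kinetic energy of the nuclei
  -\sum_{i} \frac{1}{2} \nabla_i^2       % kinetic energy of the electrons
  +\sum_{A<B} \frac{Z_A Z_B}{R_{AB}}     % nucleus-nucleus repulsion
  -\sum_{i,A} \frac{Z_A}{r_{iA}}         % electron-nucleus attraction
  +\sum_{i<j} \frac{1}{r_{ij}}           % electron-electron repulsion
```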
Note that M denotes the mass of a nucleus and m denotes the mass of an electron. R denotes distances between different nuclei (in the nucleus-nucleus repulsion term), and r denotes distances involving electrons (in the electron-electron and electron-nucleus terms). The subscripts A, B are for different nuclei and the subscripts i, j are for different electrons. In the above, we have several atoms forming a molecule, so the summation in each term runs over all the nuclei and electrons across atoms. Now, we always want to make things simpler, and the above Hamiltonian really looks complicated. Firstly, in atomic units, we can take constants like ħ, e, etc. in the above Hamiltonian to be 1. There is a well-known approximation called the Born-Oppenheimer approximation, which is quite intuitive. You might have studied in high school that the mass of the nucleus is much greater than that of the electrons, and the heavier an object, the less movement one expects of it. So the kinetic energy associated with the nuclei can be safely ignored, and the nuclei can be treated as classical particles, which makes the nucleus-nucleus repulsion term a constant, or just a fixed shift in energy. We get the following after these approximations:
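The slide isn't reproduced here, but after the Born-Oppenheimer approximation the electronic Hamiltonian (standard textbook form, atomic units) reduces to:

```latex
\hat{H}_{\text{el}} =
  -\sum_{i} \frac{1}{2} \nabla_i^2       % kinetic energy of the electrons
  -\sum_{i,A} \frac{Z_A}{r_{iA}}         % electron-nucleus attraction
  +\sum_{i<j} \frac{1}{r_{ij}}           % electron-electron repulsion
  +\sum_{A<B} \frac{Z_A Z_B}{R_{AB}}     % now a constant energy shift
```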
In the above, the electronic energy obtained for each fixed set of nuclear positions acts as a potential energy for the nuclei, and it forms a potential energy surface depending on the nuclear coordinates:
In nature, everything strives for stability, or low energy. This is why many single atoms of different elements cannot exist as they are without reacting or forming a molecule: a molecule is more stable than the separate atoms. Some elements, when mixed, cannot resist reacting with each other because the product is more stable. So on the potential energy surface, the final product of the reaction has the least energy, and the transition state, while the reaction is taking place, is the most unstable. Many Chemistry problems strive to understand different reactions, and this can help in a lot of real problems like drug discovery. That's why one common problem we try to solve here is finding the ground state energy of molecules. There is one more approximation, known as the mean-field approximation, which says that the electron-electron repulsion term can be treated as each electron feeling the average repulsion of all the other electrons.
Electrons, protons and neutrons are fermions, and fermions have an anti-symmetry property: if we write a wavefunction corresponding to a quantum system, exchanging two identical particles, like two electrons, flips the sign of the wavefunction. This is important to keep in mind when we write our Hamiltonian in different forms.
The Hamiltonian is further written in the form of one-electron integrals plus two-electron integrals (Coulomb and exchange integrals) and then expressed in terms of creation and annihilation operators (the fermionic Hamiltonian).
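The slide isn't reproduced here, but the standard second-quantized form is the following, where h_pq are the one-electron integrals, h_pqrs the two-electron integrals, and a†/a the creation/annihilation operators:

```latex
\hat{H} = \sum_{pq} h_{pq}\, a_p^\dagger a_q
        + \frac{1}{2} \sum_{pqrs} h_{pqrs}\, a_p^\dagger a_q^\dagger a_s a_r
```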
This is all fine, but how do we now tell a quantum computer that this is our Hamiltonian? There are several mapping techniques that convert the Hamiltonian into an expression in terms of Pauli operators, which a quantum computer can definitely understand. One such technique is the Jordan-Wigner mapping.
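To make the mapping concrete, here is a small NumPy sketch (my own example, not from the lectures) that builds the Jordan-Wigner operators for 3 fermionic modes and checks that they obey the fermionic anticommutation relations:

```python
import numpy as np

# Single-qubit Paulis
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilate(j, n):
    # Jordan-Wigner: a_j = Z^{⊗j} ⊗ (X + iY)/2 ⊗ I^{⊗(n-j-1)}
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return kron_all(ops)

n = 3
a = [annihilate(j, n) for j in range(n)]

# Check the canonical anticommutation relations {a_i, a_j†} = δ_ij I
for i in range(n):
    for j in range(n):
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        expected = np.eye(2 ** n) if i == j else np.zeros((2 ** n, 2 ** n))
        assert np.allclose(anti, expected)
print("Jordan-Wigner operators satisfy the fermionic anticommutation relations")
```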
Now the task is to find the ground state energy. We do this with a variational quantum algorithm, and one of the most popular is the Variational Quantum Eigensolver (VQE). Before moving on: the Hamiltonian can be simplified further by only considering the particles and orbitals involved in bond creation or bond breaking, and ignoring those that don't really contribute to the chemical reaction. This is a topic of active research. Like any variational quantum algorithm, we first need to think of an initial state. This is another topic of research, since different initial states lead to different convergence behaviour (analogous to how we choose initial parameters in Machine Learning). VQE involves a parameterized circuit (the ansatz), and with the initial set of parameters we get an initial state. Suppose this state is |x> and our Hamiltonian is H. The energy value would be <x|H|x>, calculated through several measurements or shots. This value is passed to a classical optimizer, which tweaks the values of the parameters to get a new state; the energy is calculated again through measurements, and depending on whether it has increased or decreased, the classical optimizer learns how to tweak the parameters further. The goal is to converge to a minimum energy value, which would be our ground state energy.
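To make the loop concrete, here is a minimal single-qubit VQE sketch (my own toy example, using exact state-vector expectation values instead of sampled shots, with SciPy's Nelder-Mead standing in for the classical optimizer):

```python
import numpy as np
from scipy.optimize import minimize

# Toy Hamiltonian: H = Z + 0.5 X
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X

def ansatz(theta):
    # One-parameter "circuit": Ry(theta) applied to |0>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    # <x|H|x> computed exactly; on hardware this would come from many shots
    x = ansatz(params[0])
    return np.real(x.conj() @ H @ x)

# Classical optimizer tweaks the parameter to minimize the energy
result = minimize(energy, x0=[0.1], method="Nelder-Mead")

exact_ground = np.linalg.eigvalsh(H)[0]  # exact value: -sqrt(1.25) ≈ -1.118
print(result.fun, exact_ground)
```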
That's all for the lectures. As I said, this is a summary of 2 weeks of lectures, and every topic deserves a separate blog.
I can't finish this blog without talking about the lab exercises. They were such a beautiful learning experience, and I was super happy to see my dashboard with all the ticks :). Getting to know other quantum enthusiasts was also a great experience. Many among us also became Qiskit Advocates the same week :).
I can’t thank IBM Quantum enough for such a wonderful experience. Looking forward to all future IBM Quantum events.