Prerequisites: Fourier series of multivariate functions; subspaces

Definition
We introduce the concept of the tensor product through the two-dimensional Fourier series. The Fourier series expansion of a one-dimensional function can be regarded as the expansion of a vector on a set of orthonormal basis vectors. Naturally, we would like to understand the two-dimensional Fourier series expansion from the same vector-space perspective. For convenience, we only discuss functions that can be expanded in a finite-term Fourier series.
If we take $N_x$ and $N_y$ basis functions in the $x$ and $y$ directions respectively, then there are $N_xN_y$ basis functions in the two-dimensional Fourier series; that is, the function to be expanded is a vector in an $N_xN_y$-dimensional space.
Observing the basis functions of the two-dimensional Fourier series, we find that each one is the product of an $x$ basis function from an $N_x$-dimensional space and a $y$ basis function from an $N_y$-dimensional space (the product of two one-variable functions is a two-variable function). From the vector point of view, this is a kind of vector multiplication we have not seen before: it is neither scalar multiplication (which gives a vector of the same dimension) nor an inner product (which gives a scalar). Instead, it multiplies vectors from two different vector spaces to obtain a vector in a higher-dimensional space, whose dimension equals the product of the dimensions of the two factors. We call such a multiplication the tensor product, denoted
\begin{equation}
\left\lvert v \right\rangle = \left\lvert x \right\rangle \otimes \left\lvert y \right\rangle
\end{equation}
The space containing the vectors obtained by the tensor product is called the tensor product space. For convenience we often omit the tensor product symbol and write $ \left\lvert x \right\rangle \left\lvert y \right\rangle $, or write the result of the tensor product as a whole, $ \left\lvert v \right\rangle = \left\lvert x, y \right\rangle $.
Let the bases of the two one-dimensional Fourier spaces be $\{ \left\lvert x_i \right\rangle \}$ and $\{ \left\lvert y_j \right\rangle \}$ respectively. We then define the tensor product space as the space spanned by the basis $\{ \left\lvert x_i \right\rangle \left\lvert y_j \right\rangle \}$; any vector in it can be expanded in this basis:
\begin{equation}
\left\lvert v \right\rangle = \sum_{i,j} C_{ij} \left\lvert x_i, y_j \right\rangle
\end{equation}
We define the tensor product to satisfy the distributive law: choose any vectors $ \left\lvert x \right\rangle $ and $ \left\lvert y \right\rangle $ in the two low-dimensional spaces, each expanded in the basis of its own space; their tensor product is then
\begin{equation}
\left\lvert x \right\rangle \left\lvert y \right\rangle = \left(\sum_i a_i \left\lvert x_i \right\rangle \right) \otimes \left(\sum_j b_j \left\lvert y_j \right\rangle \right)
= \sum_{i,j} a_i b_j \left\lvert x_i, y_j \right\rangle
\end{equation}
Note that a vector in the tensor product space cannot necessarily be expressed as a single tensor product; an example is $ \left\lvert x_1 \right\rangle \left\lvert y_1 \right\rangle + 2 \left\lvert x_2 \right\rangle \left\lvert y_3 \right\rangle $. This is analogous to the situation for functions: $f_x(x)f_y(y)$ can be written as $f(x, y)$, but a general $f(x, y)$ cannot necessarily be factored as $f_x(x)f_y(y)$.
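As a minimal numerical sketch (assuming NumPy; the coefficient values are arbitrary), `np.kron` computes the coefficients of a tensor product, and a rank test on the reshaped coefficient matrix distinguishes single tensor products from general vectors:

```python
import numpy as np

# Coefficient vectors of |x> and |y> in their own bases (arbitrary example values)
x = np.array([1.0, 2.0])        # N_x = 2
y = np.array([3.0, 0.0, 1.0])   # N_y = 3

# Tensor product |v> = |x> (x) |y>: a vector in the N_x * N_y = 6 dim space
v = np.kron(x, y)
print(v)                        # [3. 0. 1. 6. 0. 2.]

# A vector such as |x1>|y1> + 2|x2>|y3> generally cannot be factored:
w = np.zeros(6)
w[0] = 1.0                      # coefficient of |x1, y1>
w[5] = 2.0                      # coefficient of |x2, y3>
# A single tensor product has coefficients a_i * b_j, i.e. a rank-1 matrix:
print(np.linalg.matrix_rank(v.reshape(2, 3)))   # 1: a product vector
print(np.linalg.matrix_rank(w.reshape(2, 3)))   # 2: not a single product
```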
The basis of any vector space needs a fixed order, and once chosen it must stay the same throughout a calculation. Tensor product bases are usually ordered in one of the following two ways [1]:
\begin{equation}
\left\{ \left\lvert x_1, y_1 \right\rangle , \left\lvert x_1, y_2 \right\rangle \dots \left\lvert x_2, y_1 \right\rangle , \left\lvert x_2, y_2 \right\rangle \dots \right\}
\end{equation}
\begin{equation}
\left\{ \left\lvert x_1, y_1 \right\rangle , \left\lvert x_2, y_1 \right\rangle \dots \left\lvert x_1, y_2 \right\rangle , \left\lvert x_2, y_2 \right\rangle \dots \right\}
\end{equation}
After ordering, we can use a single subscript to label the basis vectors; let
\begin{equation}
\left\lvert x, y \right\rangle _\alpha = \left\lvert x_i, y_j \right\rangle
\end{equation}
Corresponding to the two orderings, these are, respectively,
\begin{equation}
\alpha = N_y (i-1) + j
\quad \text{or} \quad
\alpha = i + N_x (j-1)
\qquad
(1 \leqslant \alpha \leqslant N_xN_y)
\end{equation}
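These two orderings are simply the row-major ("C") and column-major ("Fortran") flattenings of the index pair $(i, j)$. A quick check of the formulas (assuming NumPy) against `np.ravel_multi_index`:

```python
import numpy as np

Nx, Ny = 3, 4
for i in range(1, Nx + 1):
    for j in range(1, Ny + 1):
        a1 = Ny * (i - 1) + j    # first ordering: row-major
        a2 = i + Nx * (j - 1)    # second ordering: column-major
        # NumPy's 0-based equivalents of the two flattenings:
        assert a1 - 1 == np.ravel_multi_index((i - 1, j - 1), (Nx, Ny), order='C')
        assert a2 - 1 == np.ravel_multi_index((i - 1, j - 1), (Nx, Ny), order='F')
print("both orderings agree with C / Fortran flattening")
```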
Inner product of vectors
To discuss norms and orthogonality in the tensor product space, we must first define an inner product: the inner product of two vectors in the tensor product space (assuming each can be written as a single tensor product) equals the product of the inner products of the corresponding vectors in the low-dimensional spaces. That is, the inner product of $ \left\lvert c \right\rangle \left\lvert d \right\rangle $ and $ \left\lvert a \right\rangle \left\lvert b \right\rangle $ is
\begin{equation}
\left( \left\langle d \right\rvert \left\langle c \right\rvert \right) \left( \left\lvert a \right\rangle \left\lvert b \right\rangle \right)
= \left\langle d \right\rvert \left\langle c \middle| a \right\rangle \left\lvert b \right\rangle
= \left\langle c \middle| a \right\rangle \left\langle d \middle| b \right\rangle
\end{equation}
Note that the Hermitian conjugate of $ \left\lvert c \right\rangle \left\lvert d \right\rangle $ is written as $ \left\langle d \right\rvert \left\langle c \right\rvert $ rather than $ \left\langle c \right\rvert \left\langle d \right\rvert $. In this way it is easy to see that the combination $ \left\langle c \middle| a \right\rangle $ forms the inner product, rather than $ \left\langle d \middle| a \right\rangle $. But if the two vectors are written as $ \left\lvert a, b \right\rangle $ and $ \left\lvert c, d \right\rangle $, then the inner product is written as [2]
\begin{equation}
\left\langle c, d \middle| a, b \right\rangle = \left\langle c \middle| a \right\rangle \left\langle d \middle| b \right\rangle
\end{equation}
Whether the inner product in the tensor product space is commutative depends on whether the inner products in the two low-dimensional spaces are. From the general property of the inner product $ \left\langle u \middle| v \right\rangle = \left\langle v \middle| u \right\rangle ^*$, the tensor product space also satisfies
\begin{equation}
\left\langle a, b \middle| c, d \right\rangle = \left\langle a \middle| c \right\rangle \left\langle b \middle| d \right\rangle
= \left\langle c \middle| a \right\rangle ^* \left\langle d \middle| b \right\rangle ^* = \left\langle c, d \middle| a, b \right\rangle ^*
\end{equation}
If the bases of the two low-dimensional spaces are orthonormal, then the basis of the tensor product space is also orthonormal:
\begin{equation}
\left\langle x_{i'}, y_{j'} \middle| x_i, y_j \right\rangle = \left\langle x_{i'} \middle| x_i \right\rangle \left\langle y_{j'} \middle| y_j \right\rangle
= \delta_{i,i'}\delta_{j,j'}
\end{equation}
In what follows, we will generally work with orthonormal bases.
Knowing the inner products between the basis vectors of the tensor product space, the inner product of any two vectors in the space can be computed by expanding both in the basis and applying the distributive law together with the orthonormality condition, which yields the familiar formula
\begin{equation} \begin{aligned}
\left\langle v' \middle| v \right\rangle &= \left(\sum_{i',j'} C'^*_{i',j'} \left\langle y_{j'} \right\rvert \left\langle x_{i'} \right\rvert \right) \left(\sum_{i,j} C_{i,j} \left\lvert x_i \right\rangle \left\lvert y_j \right\rangle \right) \\
&= \sum_{i',j'} \sum_{i,j} C'^*_{i',j'} C_{i,j} \delta_{i,i'}\delta_{j,j'}
= \sum_{i,j} C'^*_{i,j} C_{i,j}
\end{aligned} \end{equation}
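In code this is a single contraction of the two coefficient arrays; `np.vdot` conjugates its first argument, matching the $C'^*_{i,j}$ above (random complex coefficients used purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Ny = 2, 3
C  = rng.normal(size=(Nx, Ny)) + 1j * rng.normal(size=(Nx, Ny))   # C_{ij}
Cp = rng.normal(size=(Nx, Ny)) + 1j * rng.normal(size=(Nx, Ny))   # C'_{ij}

# <v'|v> = sum_{ij} C'^*_{ij} C_{ij}; vdot flattens and conjugates its first arg
assert np.isclose(np.vdot(Cp, C), np.sum(Cp.conj() * C))
print(np.vdot(Cp, C))
```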
The corresponding completeness relation can still be written as
\begin{equation}
\sum_{i,j} \left\lvert x_i, y_j \right\rangle \left\langle x_i, y_j \right\rvert = \boldsymbol{\mathbf{I}}
\end{equation}
where $ \boldsymbol{\mathbf{I}} $ is the identity operator of the tensor product space.
We now look at the two-dimensional Fourier series from the perspective of the tensor product space. Writing the expanded function as a vector $ \left\lvert f \right\rangle $, we have
\begin{equation}
\left\lvert f \right\rangle = \sum_\alpha C_\alpha \left\lvert x, y \right\rangle _\alpha = \sum_{i,j} C_{i,j} \left\lvert x_i, y_j \right\rangle
\end{equation}
where
\begin{equation}
C_{i,j} = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f_{x,i}^*(x) f_{y,j}^*(y) f(x, y) \,\mathrm{d}{x} \,\mathrm{d}{y}
= \left\langle x_i, y_j \middle| f \right\rangle
\end{equation}
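As an illustrative check (the basis choice and sample function here are assumptions, not from the text), take the orthonormal Fourier basis $\varphi_k(t) = \mathrm{e}^{\mathrm{i}kt}/\sqrt{2\pi}$ on $[0, 2\pi)$ and recover a known coefficient by numerical integration:

```python
import numpy as np

# Orthonormal Fourier basis on [0, 2*pi): phi_k(t) = exp(i*k*t)/sqrt(2*pi)
L, n = 2 * np.pi, 256
t = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(t, t, indexing='ij')
phi = lambda k, s: np.exp(1j * k * s) / np.sqrt(L)

# A function with known expansion: f = 2 phi_1(x) phi_2(y) - phi_3(x) phi_1(y)
f = 2 * phi(1, X) * phi(2, Y) - phi(3, X) * phi(1, Y)

# C_{12} = <x_1, y_2|f>: integrate phi_1^*(x) phi_2^*(y) f(x, y) numerically
C_12 = np.sum(phi(1, X).conj() * phi(2, Y).conj() * f) * (L / n) ** 2
print(np.round(C_12, 6))        # approximately 2, as expected
```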
We can verify the completeness of the basis (eq. 13): substituting eq. 15 into eq. 14 gives
\begin{equation}
\left\lvert f \right\rangle = \sum_{i,j} \left\lvert x_i, y_j \right\rangle \left\langle x_i, y_j \middle| f \right\rangle
= \left(\sum_{i,j} \left\lvert x_i, y_j \right\rangle \left\langle x_i, y_j \right\rvert \right) \left\lvert f \right\rangle
\end{equation}
This holds for all $ \left\lvert f \right\rangle $, which indirectly proves
\begin{equation}
\sum_{i,j} \left\lvert x_i, y_j \right\rangle \left\langle x_i, y_j \right\rvert = \boldsymbol{\mathbf{I}}
\end{equation}
If a vector in the tensor product space can be expressed as a single tensor product $ \left\lvert u \right\rangle \left\lvert v \right\rangle $, then its expansion in the basis is
\begin{equation}
\left\lvert u \right\rangle \left\lvert v \right\rangle = \left(\sum_i a_i \left\lvert u_i \right\rangle \right) \otimes \left(\sum_j b_j \left\lvert v_j \right\rangle \right)
= \sum_{i,j} a_i b_j \left\lvert u_i, v_j \right\rangle
\end{equation}
Subspaces of the tensor product space
Let $ \left\lvert u_i \right\rangle $ and $ \left\lvert v_j \right\rangle $ be the bases of the two low-dimensional spaces; then there are two families of "natural" subspaces in the tensor product space.
One way is to divide the tensor product basis into groups according to the value of $i$, each group spanning a subspace; in the same way, subspaces can also be formed according to $j$. We call these the $u$ subspaces and $v$ subspaces respectively.
Any vector in the tensor product space can be regarded as a linear combination of vectors, one from each subspace:
\begin{equation}
\begin{aligned}
\sum_{i,j} C_{ij} \left\lvert u_i \right\rangle \left\lvert v_j \right\rangle &= \sum_j \left(\sum_i C_{ij} \left\lvert u_i \right\rangle \right) \otimes \left\lvert v_j \right\rangle = \sum_j \left\lvert a_j \right\rangle \left\lvert v_j \right\rangle \\
&= \sum_i \left\lvert u_i \right\rangle \otimes \left(\sum_j C_{ij} \left\lvert v_j \right\rangle \right) = \sum_i \left\lvert u_i \right\rangle \left\lvert b_i \right\rangle
\end{aligned} \end{equation}
In the above formula, $ \left\lvert a_j \right\rangle $ can be understood as the component of the vector in each $ \left\lvert v_j \right\rangle $ subspace (a vector in the $u$ space), and $ \left\lvert b_i \right\rangle $ as the component in each $ \left\lvert u_i \right\rangle $ subspace (a vector in the $v$ space).
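In terms of the coefficient matrix $C_{ij}$, the $ \left\lvert v_j \right\rangle $ subspace components are its columns and the $ \left\lvert u_i \right\rangle $ subspace components its rows; a small NumPy sketch (row-major ordering, arbitrary coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
Nu, Nv = 2, 3
C = rng.normal(size=(Nu, Nv))        # coefficients C_{ij} of a tensor-space vector

# |a_j> = sum_i C_{ij} |u_i>: column j of C (component in the |v_j> subspace)
a = [C[:, j] for j in range(Nv)]
# |b_i> = sum_j C_{ij} |v_j>: row i of C (component in the |u_i> subspace)
b = [C[i, :] for i in range(Nu)]

# Reassembling sum_i |u_i>|b_i> reproduces the flattened vector
v = sum(np.kron(np.eye(Nu)[i], b[i]) for i in range(Nu))
assert np.allclose(v, C.ravel())
```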
Operators
The tensor product of two linear operators $ \hat{A} $ and $ \hat{B} $ in the low-dimensional spaces gives an operator $ \hat{A} \otimes \hat{B} $ in the tensor product space. We define $ \hat{A} \otimes \hat{B} $ as a linear operator whose action on any basis vector is
\begin{equation}
\hat{A} \otimes \hat{B} \left\lvert u_i, v_j \right\rangle = ( \hat{A} \left\lvert u_i \right\rangle ) \otimes ( \hat{B} \left\lvert v_j \right\rangle )
\end{equation}
To act on an arbitrary vector, expand the vector as a linear combination of the tensor product basis and act on each basis vector separately.
In particular, the operation $ \hat{A} \otimes \hat{I} $ extends an operator $ \hat{A} $ of the $\{ \left\lvert u_i \right\rangle \}$ space to the tensor product space:
\begin{equation}
\hat{A} \otimes \hat{I} \left\lvert u_i, v_j \right\rangle = ( \hat{A} \left\lvert u_i \right\rangle ) \otimes \left\lvert v_j \right\rangle
\end{equation}
This shows that the operator $ \hat{A} \otimes \hat{I} $ is closed in every $v$ subspace. Therefore, acting with $ \hat{A} \otimes \hat{I} $ on any vector in the tensor product space is equivalent to acting with it on the component of the vector in each subspace separately. The same goes for $ \hat{B} $, so we will not repeat the argument. From the definition it is not difficult to prove
\begin{equation}
( \hat{A} \otimes \hat{I} )( \hat{I} \otimes \hat{B} ) = ( \hat{I} \otimes \hat{B} )( \hat{A} \otimes \hat{I} ) = \hat{A} \otimes \hat{B}
\end{equation}
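A quick NumPy verification of this definition and identity, using random matrices as stand-ins for $ \hat{A} $ and $ \hat{B} $:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))          # operator in the 2-dim u space
B = rng.normal(size=(3, 3))          # operator in the 3-dim v space
I2, I3 = np.eye(2), np.eye(3)

# Action on a product vector: (A (x) B)(|u> (x) |v>) = (A|u>) (x) (B|v>)
u, v = rng.normal(size=2), rng.normal(size=3)
assert np.allclose(np.kron(A, B) @ np.kron(u, v), np.kron(A @ u, B @ v))

# (A (x) I)(I (x) B) = (I (x) B)(A (x) I) = A (x) B
assert np.allclose(np.kron(A, I3) @ np.kron(I2, B), np.kron(A, B))
assert np.allclose(np.kron(I2, B) @ np.kron(A, I3), np.kron(A, B))
```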
As with vectors, not every linear operator on the tensor product space can be expressed as the tensor product of two operators.
Two-dimensional matrix
If the basis of the tensor product space is orthonormal, then the matrix elements of a tensor product operator are
\begin{equation} \begin{aligned}
( \hat{A} \otimes \hat{B} )_{\alpha,\alpha'} &= ( \hat{A} \otimes \hat{B} )_{i,j,i',j'} = \left\langle u_{i} v_{j} \right\rvert \hat{A} \otimes \hat{B} \left\lvert u_{i'} v_{j'} \right\rangle \\
&= \left\langle u_{i} v_{j} \right\rvert ( \hat{A} \left\lvert u_{i'} \right\rangle \otimes \hat{B} \left\lvert v_{j'} \right\rangle )
= \left\langle u_i \right\rvert \hat{A} \left\lvert u_{i'} \right\rangle \left\langle v_j \right\rvert \hat{B} \left\lvert v_{j'} \right\rangle
\end{aligned} \end{equation}
Note that the position of a matrix element depends on the ordering of the basis. If sorted according to eq. 4, the basis vectors of each $u$ subspace are grouped together. Using the concept of block matrices: if the vector is divided into segments according to the $u$ subspaces, each segment holding the coefficients of the component in one subspace, then the matrix divides into square blocks, and the $(m, n)$ block is
\begin{equation}
A_{mn} \boldsymbol{\mathbf{B}}
\end{equation}
Therefore, the matrix of $ \hat{A} \otimes \hat{B} $ expands each matrix element $A_{mn}$ of $ \boldsymbol{\mathbf{A}} $ into a matrix block $A_{mn} \boldsymbol{\mathbf{B}} $.
Similarly, if the basis is sorted by eq. 5, that is, the basis vectors of each $v$ subspace are grouped together, then the matrix of $ \hat{A} \otimes \hat{B} $ expands each matrix element $B_{mn}$ of $ \boldsymbol{\mathbf{B}} $ into the matrix block $B_{mn} \boldsymbol{\mathbf{A}} $.
Any linear operator can be expressed as a linear combination of tensor products of operators, so a general operator can still be written as a block matrix, but the blocks need not be proportional to one another.
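Both block structures can be checked directly with `np.kron`, which uses the ordering of eq. 4 for `np.kron(A, B)`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(3, 3))

# Ordering of eq. 4: each element A_mn expands into the block A_mn * B
AB = np.kron(A, B)
for m in range(2):
    for n in range(2):
        assert np.allclose(AB[3*m:3*m+3, 3*n:3*n+3], A[m, n] * B)

# Ordering of eq. 5: each element B_mn expands into the block B_mn * A
BA = np.kron(B, A)
for m in range(3):
    for n in range(3):
        assert np.allclose(BA[2*m:2*m+2, 2*n:2*n+2], B[m, n] * A)
```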
Four-dimensional tensor
Ordering the bases of the tensor product space breaks the symmetry between the two factor spaces. Instead, we can write the coordinates $C_{i,j}$ of a vector $ \left\lvert c \right\rangle $ in the tensor product space in the form of a matrix (note that conceptually this is still a vector and does not represent a linear map). Row $i$ of the matrix holds the coordinates of the $ \left\lvert u_i \right\rangle $ subspace component, and column $j$ the coordinates of the $ \left\lvert v_j \right\rangle $ subspace component.
The action of $ \hat{A} \otimes \hat{B} $ on this matrix is equivalent to applying matrix $ \boldsymbol{\mathbf{A}} $ to each column (a row transformation of matrix $ \boldsymbol{\mathbf{C}} $) and then applying matrix $ \boldsymbol{\mathbf{B}} $ to each row (a column transformation of matrix $ \boldsymbol{\mathbf{C}} $), or performing the two transformations in the opposite order:
\begin{equation}
\sum_{j'} B_{jj'} \left(\sum_{i'} A_{ii'} C_{i'j'} \right)
= \sum_{i'} A_{ii'} \left(\sum_{j'}B_{jj'} C_{i'j'} \right)
\end{equation}
If $ \boldsymbol{\mathbf{A}} $ or $ \boldsymbol{\mathbf{B}} $ is diagonal, we need only multiply each row or column of the coefficient matrix by the corresponding diagonal element and then apply the matrix $ \boldsymbol{\mathbf{B}} $ or $ \boldsymbol{\mathbf{A}} $ respectively. This is equivalent to acting independently on the components in each subspace.
We can use a four-index tensor to represent such an operator: let $T = A \otimes B$, so that $T_{i,j,i',j'} = A_{i,i'} B_{j,j'}$; then the action of the operator on a vector is
\begin{equation}
(T \left\lvert c \right\rangle )_{i,j} = \sum_{i',j'} T_{i,j,i',j'} C_{i',j'}
\end{equation}
General linear operators can also be represented by four-dimensional tensors.
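The four-index action is conveniently written with `np.einsum`; the sketch below also confirms its equivalence with the row/column transformations $ \boldsymbol{\mathbf{A}} \boldsymbol{\mathbf{C}} \boldsymbol{\mathbf{B}} ^T$ above and with the flattened Kronecker form:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(3, 3))
C = rng.normal(size=(2, 3))          # coefficient matrix C_{i,j}

# Four-index tensor T_{i,j,i',j'} = A_{i,i'} B_{j,j'}
T = np.einsum('ik,jl->ijkl', A, B)

# (T|c>)_{i,j} = sum_{i',j'} T_{i,j,i',j'} C_{i',j'}
TC = np.einsum('ijkl,kl->ij', T, C)

# Row transformation by A, then column transformation by B (in either order)
assert np.allclose(TC, A @ C @ B.T)
# The flattened (eq. 4 ordering) Kronecker form gives the same result
assert np.allclose(TC.ravel(), np.kron(A, B) @ C.ravel())
```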
Multiple tensor products
For example, the tensor product of vectors $ \left\lvert a \right\rangle $, $ \left\lvert b \right\rangle $, and $ \left\lvert c \right\rangle $ from three spaces is $ \left\lvert a \right\rangle \left\lvert b \right\rangle \left\lvert c \right\rangle $, which can be represented as a three-index array. An operator can then be represented as a tensor with $2 \times 3 = 6$ indices, and its action on a vector is
\begin{equation}
y_{i,j,k} = \sum_{i',j',k'} Q_{i,j,k,i',j',k'} x_{i',j',k'}
\end{equation}
Other operations generalize in the same way.
Partial inner product
If we define the partial inner product of a vector in the $u$ space with the tensor product of two vectors as [3]
\begin{equation}
\left\langle u_1 \right\rvert ( \left\lvert u \right\rangle \left\lvert v \right\rangle ) = \left\langle u_1 \middle| u \right\rangle \left\lvert v \right\rangle
\end{equation}
and require the operation to be linear, then multiplying an arbitrary bra of the $u$ space with an arbitrary vector of the tensor product space gives
\begin{equation}
\begin{aligned}
\left(\sum_k x_k \left\langle u_k \right\rvert \right) \left(\sum_{i,j} C_{ij} \left\lvert u_i \right\rangle \left\lvert v_j \right\rangle \right)
&= \sum_{i,j} C_{ij} \left(\sum_k x_k \left\langle u_k \middle| u_i \right\rangle \right) \left\lvert v_j \right\rangle \\
&=\sum_j \left(\sum_i x_i C_{ij} \right) \left\lvert v_j \right\rangle
\end{aligned} \end{equation}
That is, we take the inner product with the component in each $ \left\lvert v_j \right\rangle $ subspace, obtaining a vector in the $v$ space.
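This partial inner product is a contraction over the $u$ index only; for example, with `np.einsum` (arbitrary example coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
Nu, Nv = 2, 3
C = rng.normal(size=(Nu, Nv))    # coefficients C_{ij} of the tensor-space vector
x = rng.normal(size=Nu)          # coefficients x_k of the bra sum_k x_k <u_k|

# sum_j (sum_i x_i C_{ij}) |v_j>: contract the u index, leave the v index free
result = np.einsum('i,ij->j', x, C)
assert np.allclose(result, x @ C)    # a vector in the v space
```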
In the same way, we can define the partial inner product with the $v$ space:
\begin{equation}
\left\langle v_1 \right\rvert ( \left\lvert u \right\rangle \left\lvert v \right\rangle ) = ( \left\langle v_1 \middle| v \right\rangle ) \left\lvert u \right\rangle
\end{equation}
Then any bra $ \left\langle v_1 \right\rvert $ of the $v$ space multiplied by any vector of the tensor product space equals the inner product of $ \left\langle v_1 \right\rvert $ with the component in each $ \left\lvert u_i \right\rangle $ subspace, giving a vector in the $u$ space.
Partial matrix elements
First define the partial action of the operator as $( \hat{A} \otimes \hat{B} ) \left\lvert b_j \right\rangle = ( \hat{A} \left\lvert \cdot \right\rangle ) \otimes ( \hat{B} \left\lvert b_j \right\rangle )$: applied to a vector of the $a$ space, it acts with $ \hat{A} $ on that vector and then takes the tensor product with $ \hat{B} \left\lvert b_j \right\rangle $.
Then define the partial matrix element $ \left\langle b_i \middle| \hat{A} \otimes \hat{B} \middle| b_j \right\rangle = \left\langle b_i \middle| \hat{B} \middle| b_j \right\rangle \hat{A} $, which is an operator in the $a$ space. Thus, acting with $ \hat{A} \otimes \hat{B} $ on an arbitrary vector is equivalent to first applying $ \hat{A} $ to the component in each $ \left\lvert b_j \right\rangle $ subspace, and then linearly mixing these subspaces according to the matrix $ \left\langle b_i \middle| \hat{B} \middle| b_j \right\rangle $. In particular, if $ \left\langle b_i \middle| \hat{B} \middle| b_j \right\rangle $ is a diagonal matrix, we need only apply $ \hat{A} $ to the component in each $ \left\lvert b_j \right\rangle $ subspace and multiply by the diagonal matrix element $ \left\langle b_j \middle| \hat{B} \middle| b_j \right\rangle $.
The discussion of $ \left\langle a_i \middle| \hat{A} \otimes \hat{B} \middle| a_j \right\rangle = \left\langle a_i \middle| \hat{A} \middle| a_j \right\rangle \hat{B} $ is the same.
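In the four-index picture, a partial matrix element is simply a slice of $T$ at fixed $b$ indices; a short check that $ \left\langle b_i \middle| \hat{A} \otimes \hat{B} \middle| b_j \right\rangle = \left\langle b_i \middle| \hat{B} \middle| b_j \right\rangle \hat{A} $ for matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))      # operator in the a space
B = rng.normal(size=(3, 3))      # operator in the b space

# T with a-space indices first: T_{i,j,i',j'} = A_{i,i'} B_{j,j'}
T = np.einsum('ik,jl->ijkl', A, B)

# Partial matrix element <b_j| A (x) B |b_j'> = B[j, j'] * A (an a-space operator)
j, jp = 2, 0
assert np.allclose(T[:, j, :, jp], B[j, jp] * A)
```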
Other notes
Theorems on eigenproblems
- If we consider the eigenproblem of $ \hat{A} $ in the tensor product space, each eigenvector $ \left\lvert eig_i \right\rangle $ of $ \hat{A} $ becomes $n$-fold degenerate, and a basis of the degenerate subspace is $ \left\lvert eig_i \right\rangle \left\lvert v_1 \right\rangle , \left\lvert eig_i \right\rangle \left\lvert v_2 \right\rangle \dots$
- $ \hat{A} \otimes \hat{B} $ has a total of $m \times n$ eigenvalues, where $m$ and $n$ are the dimensions of $A$ and $B$ respectively. If $a_1, a_2, \dots, a_m$ and $b_1, b_2, \dots, b_n$ are the eigenvalues of $A$ and $B$, then the eigenvalues and eigenvectors of $A \otimes B$ are $a_i b_j$ and $ \left\lvert u_i, v_j \right\rangle $ ($1 \leqslant i \leqslant m$, $1 \leqslant j \leqslant n$) respectively (see the numerical check after this list).
- The eigenvalues and eigenvectors of $A \otimes I + I \otimes B$ are $a_i + b_j$ and $ \left\lvert u_i, v_j \right\rangle $ respectively.
- The tensor product of two Hermitian matrices is still a Hermitian matrix.
- The sum of two Hermitian matrices is still a Hermitian matrix.
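A numerical check of the product and sum rules above, with random Hermitian matrices standing in for $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)); A = A + A.conj().T
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)); B = B + B.conj().T

a = np.linalg.eigvalsh(A)            # eigenvalues a_i of Hermitian A
b = np.linalg.eigvalsh(B)            # eigenvalues b_j of Hermitian B

# Eigenvalues of A (x) B are all products a_i * b_j
assert np.allclose(np.sort(np.linalg.eigvalsh(np.kron(A, B))),
                   np.sort(np.outer(a, b).ravel()))

# Eigenvalues of A (x) I + I (x) B are all sums a_i + b_j
S = np.kron(A, np.eye(3)) + np.kron(np.eye(2), B)
assert np.allclose(np.sort(np.linalg.eigvalsh(S)),
                   np.sort(np.add.outer(a, b).ravel()))
```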
Expansion of operators
In addition, the operators of quantum mechanics (that I have seen) can all be expressed as a linear combination of tensor products of operators from the two smaller spaces
\begin{equation}
\Omega = \sum_k A_k \otimes B_k
\end{equation}
For example, the single-electron Hamiltonian in a central force field is
\begin{equation}
H = K_r + \frac{L^2}{2mr^2} + V(r) - q \boldsymbol{\mathbf{\mathcal{E}}} \boldsymbol\cdot \boldsymbol{\mathbf{r}}
\end{equation}
Here $K_r$ and $V(r)$ are tensor products of operators in the $R$ space with the identity operator in the $Y$ space, while the $L^2$ operator (eq. 4) is the tensor product of the identity operator in the $R$ space with an operator in the $Y$ space. The last term contains no differential operator; it is simply a function, and such a function can also be expanded as a linear combination of the basis of the tensor product space, that is, a linear combination of tensor products of operators in the two spaces.
A simple proof sketch: the tensor (direct) product of a 2 by 2 matrix $ \boldsymbol{\mathbf{A}} $ with a 2 by 2 matrix $ \boldsymbol{\mathbf{B}} $ multiplies each matrix element of $ \boldsymbol{\mathbf{A}} $ by $ \boldsymbol{\mathbf{B}} $, giving a 4 by 4 matrix. To decompose an arbitrary 4 by 4 matrix, we need only take the $ \boldsymbol{\mathbf{A}} _i$ to be the matrix basis $[1, 0; 0, 0]$, $[0, 1; 0, 0]$, $[0, 0; 1, 0]$, $[0, 0; 0, 1]$, and let each $ \boldsymbol{\mathbf{B}} _i$ be the corresponding 2 by 2 block.
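The proof sketch translates directly into code: slice an arbitrary $4 \times 4$ matrix into its $2 \times 2$ blocks and rebuild it as $\sum_i \boldsymbol{\mathbf{A}} _i \otimes \boldsymbol{\mathbf{B}} _i$ with matrix units $ \boldsymbol{\mathbf{A}} _i$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))          # arbitrary 4x4 matrix to decompose

terms = []
for m in range(2):
    for n in range(2):
        E = np.zeros((2, 2)); E[m, n] = 1.0   # matrix unit: the A_i basis
        block = M[2*m:2*m+2, 2*n:2*n+2]       # corresponding 2x2 block: B_i
        terms.append(np.kron(E, block))

assert np.allclose(sum(terms), M)    # M = sum_i A_i (x) B_i
```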
Single-space basis expansion
Any vector in the tensor product space can also be expanded in the basis of just one of the smaller spaces. For example, the wave function can be regarded as living in the tensor product of the radial wave function space and the angular wave function space, and it can be expanded in the basis of the angular space alone, the spherical harmonics:
\begin{equation}
\left\lvert \Psi \right\rangle = \sum_j \left\lvert R_j \right\rangle \left\lvert Y_j \right\rangle
\end{equation}
where $j$ stands for some ordering of the combinations of $l, m$.
If we use a basis in one of the smaller spaces but not in the other, we obtain an "operator matrix": each matrix element is an operator in the $R$ space. Consider, for example, the Schrödinger equation
\begin{equation}
H \left\lvert \Psi \right\rangle = \mathrm{i} \frac{\partial}{\partial{t}} \left\lvert \Psi \right\rangle
\end{equation}
After using the base of $Y$ space, it can be expressed as
\begin{equation}
H \sum_j \left\lvert R_j \right\rangle \left\lvert Y_j \right\rangle = \mathrm{i} \sum_j \frac{\partial}{\partial{t}} \left\lvert R_j \right\rangle \left\lvert Y_j \right\rangle
\end{equation}
Multiplying by $ \left\langle Y_i \right\rvert $ on the left gives
\begin{equation}
\sum_j \left\langle Y_i \middle| H \middle| Y_j \right\rangle \left\lvert R_j \right\rangle = \mathrm{i} \frac{\partial}{\partial{t}} \left\lvert R_i \right\rangle
\quad\Rightarrow\quad
\sum_j H_{ij} \left\lvert R_j \right\rangle = \mathrm{i} \frac{\partial}{\partial{t}} \left\lvert R_i \right\rangle
\end{equation}
Each $H_{ij}$ is an operator in the $R$ space.
From this we conclude: a vector in the tensor product space can be expanded as a sum of tensor products of $N$ vectors of one small space with the $N$ basis vectors of the other small space, the "coordinates" being the "column vector" formed by the $N$ vectors of the first space. A vector in the tensor product space can be multiplied by a vector of one small space to give a vector of the other small space. An operator in the tensor product space, multiplied on the left and right by the orthonormal basis vectors of one small space, gives operators in the other small space.
Another example is the multi-channel scattering problem (see hourly physics notes): the total wave function is regarded as the tensor product of the space of one variable $R$ (for example, the distance between a certain electron and the center of mass) and the space of all remaining degrees of freedom. Each channel corresponds to a basis vector of the space of the remaining degrees of freedom, with a radial wave function $\psi_i$ in the $R$ space as its coefficient. If there are $N$ channels, we obtain $N$ coupled equations for the $\psi_i$, which are in fact the Schrödinger equation in matrix form.
See also the tensor product page on Wikipedia.
1. Because each basis vector here carries two indices, beginners are tempted to arrange the basis vectors into a rectangle, which is wrong.
2. By writing convention, the labels in a ket and in the corresponding bra keep the same order, so the Hermitian conjugate of $ \left\lvert c, d \right\rangle $ is written as $ \left\langle c, d \right\rvert $ rather than $ \left\langle d, c \right\rvert $.
3. This operation is not a standard one but a definition made in this book.