Journal of Zhejiang University (Science Edition), 2017, Vol. 44, Issue 5: 511-515. DOI: 10.3785/j.issn.1008-9497.2017.05.002
An eigenvalue inequality of a class of matrices and its applications in proving the Fischer inequality
ZHANG Huamin1 , YIN Hongcai2     
1. Department of Mathematics & Physics, Bengbu University, Bengbu 233030, Anhui Province, China;
2. School of Management Science and Engineering, Anhui University of Finance & Economics, Bengbu 233000, Anhui Province, China
Received Date: February 4, 2016
Foundation item: Supported by the Natural Science Foundation of Anhui Provincial Education Department (KJ2016A458) and the Excellent Personnel Domestic Visiting Project (gxfxZD2016274)
About the author: ZHANG Huamin (1972-), ORCID: http://orcid.org/0000-0002-7416-7415, male, doctor, associate professor; his fields of interest are matrix theory and its applications, E-mail: zhangeasymail@126.com
Abstract: The Hadamard inequality and the Fischer inequality play an important role in matrix theory. Many articles have addressed these inequalities, providing new proofs, noteworthy extensions, generalizations, refinements, counterparts and applications. This paper discusses the eigenvalues of a class of matrices related to the real symmetric positive definite matrix and establishes an inequality for these eigenvalues. By using this inequality, the Fischer determinant inequality and the Hadamard determinant inequality are proved.
Key words: positive definite matrix    eigenvalue    eigenvector    determinant inequality    
1 Introduction and preliminaries

Inequalities have been an active research topic in recent years, and classical convexity has been generalized and extended in diverse ways. One of these generalizations is pre-invexity, introduced by WEIR et al. [1] as a significant generalization of convex functions. Many researchers have studied the basic properties of pre-invex functions and their role in optimization, variational inequalities and equilibrium problems [2-4].

The Hadamard and Fischer inequalities are primary inequalities for the real symmetric positive definite matrix, and many other inequalities can be proved by using these two. There are many methods to prove these inequalities [5-7], and some results have been established inspired by the Hadamard inequality [8-9].

The real symmetric positive definite matrix has many properties and has been used in many areas [10-12]. Some of these properties can be used to prove the Hadamard inequality. In this note, inspired by the results established in [13-14], a new eigenvalue inequality related to the real symmetric positive definite matrix is proposed, and the Hadamard and Fischer inequalities are proved by using this new inequality.

Firstly, let us introduce some notation and lemmas. I_n is the identity matrix of order n×n. For a square matrix A, we use λ[A], det(A) and A^T to represent the set of eigenvalues, the determinant and the transpose of A, respectively.

Next, we introduce two lemmas. The following result about the determinant of a block matrix is well known [10].

Lemma 1   If matrix A is invertible, then for any block matrix, we have

$ \det \left( {\left[ {\begin{array}{*{20}{c}} \mathit{\boldsymbol{A}}&\mathit{\boldsymbol{B}}\\ \mathit{\boldsymbol{C}}&\mathit{\boldsymbol{D}} \end{array}} \right]} \right) = \det \left( \mathit{\boldsymbol{A}} \right)\det \left( {\mathit{\boldsymbol{D}} - \mathit{\boldsymbol{C}}{\mathit{\boldsymbol{A}}^{ - 1}}\mathit{\boldsymbol{B}}} \right), $ (1)

or, if the matrix D is invertible, then we have

$ \det \left( {\left[ {\begin{array}{*{20}{c}} \mathit{\boldsymbol{A}}&\mathit{\boldsymbol{B}}\\ \mathit{\boldsymbol{C}}&\mathit{\boldsymbol{D}} \end{array}} \right]} \right) = \det \left( \mathit{\boldsymbol{D}} \right)\det \left( {\mathit{\boldsymbol{A}} - \mathit{\boldsymbol{B}}{\mathit{\boldsymbol{D}}^{ - 1}}\mathit{\boldsymbol{C}}} \right). $ (2)
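Lemma 1 is easy to check numerically. The following sketch (a minimal illustration using numpy; the block sizes, the diagonal shifts and the random seed are arbitrary choices made here, not taken from the paper) verifies both forms of the block determinant identity:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 3, 2
A = rng.standard_normal((m, m)) + 5 * np.eye(m)   # shift keeps A comfortably invertible
B = rng.standard_normal((m, p))
C = rng.standard_normal((p, m))
D = rng.standard_normal((p, p)) + 5 * np.eye(p)   # likewise for D

block = np.block([[A, B], [C, D]])
lhs = np.linalg.det(block)
rhs1 = np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)   # equation (1)
rhs2 = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)   # equation (2)
assert np.allclose(lhs, rhs1) and np.allclose(lhs, rhs2)
```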

Lemma 2   If A∈R^{m×n} is a full column-rank matrix, then A(A^TA)^{-1}A^T is idempotent, its eigenvalues are 1 or 0, and there exists an orthogonal matrix Q such that

$ \boldsymbol{Q}^{\rm T}\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T} \right]\boldsymbol{Q} = {\rm diag}\left[ 1, \cdots ,1,0, \cdots ,0 \right] = :\Lambda . $

Furthermore, the number of 1's in Λ equals rank[A] = n.

This lemma was suggested in [14]; for convenience, we give the proof here.

Proof   If σ∈λ[A(A^TA)^{-1}A^T], then there exists a nonzero vector x∈R^m satisfying

$ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T}\boldsymbol{x} = \sigma \boldsymbol{x}. $

Thus, we have

$ \begin{array}{c} \left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T}\boldsymbol{x} \right]^{\rm T}\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T}\boldsymbol{x} \right] = \left( \sigma \boldsymbol{x} \right)^{\rm T}\left( \sigma \boldsymbol{x} \right),\\ \boldsymbol{x}^{\rm T}\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T} \right]\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T} \right]\boldsymbol{x} = \sigma^2\left\| \boldsymbol{x} \right\|^2,\\ \boldsymbol{x}^{\rm T}\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T} \right]\boldsymbol{x} = \sigma^2\left\| \boldsymbol{x} \right\|^2,\\ \boldsymbol{x}^{\rm T}\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T} \right]\boldsymbol{x} = \sigma^2\left\| \boldsymbol{x} \right\|^2,\\ \sigma \boldsymbol{x}^{\rm T}\boldsymbol{x} = \sigma^2\left\| \boldsymbol{x} \right\|^2,\\ \sigma \left\| \boldsymbol{x} \right\|^2 = \sigma^2\left\| \boldsymbol{x} \right\|^2. \end{array} $

Since‖x2≠0, A(ATA)-1AT has eigenvalues σ=0 or σ=1.Because of the symmetry of A(ATA)-1AT, there exists a real orthogonal matrix Q:=[q1, q2, …, qm]∈Rm×m such that

$ {\mathit{\boldsymbol{Q}}^{\rm{T}}}\left[ {\mathit{\boldsymbol{A}}{{\left( {{\mathit{\boldsymbol{A}}^{\rm{T}}}\mathit{\boldsymbol{A}}} \right)}^{ - 1}}{\mathit{\boldsymbol{A}}^{\rm{T}}}} \right]\mathit{\boldsymbol{Q}} = {\rm{diag}}\left[ {1, \cdots ,1,0, \cdots ,0} \right] = :\Lambda . $

On the other hand, since (A^TA)^{-1}A^T is the left pseudo-inverse of A, we have

$ {\rm rank}\left[ \Lambda \right] = {\rm rank}\left[ \boldsymbol{Q}^{\rm T}\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T} \right]\boldsymbol{Q} \right] = {\rm rank}\left[ \boldsymbol{A}\left( \boldsymbol{A}^{\rm T}\boldsymbol{A} \right)^{-1}\boldsymbol{A}^{\rm T} \right] = {\rm rank}\left[ \boldsymbol{A} \right] = n. $

This proves lemma 2.
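The conclusion of lemma 2 can also be illustrated numerically; the following numpy sketch (the dimensions m = 6, n = 3 are arbitrary choices for illustration) checks that A(A^TA)^{-1}A^T is idempotent, has only the eigenvalues 0 and 1, and that the number of ones equals rank[A] = n:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 3
A = rng.standard_normal((m, n))          # full column rank with probability 1
P = A @ np.linalg.inv(A.T @ A) @ A.T     # the matrix A(A^T A)^{-1} A^T of lemma 2

assert np.allclose(P @ P, P)             # idempotent
eigs = np.linalg.eigvalsh(P)             # P is symmetric, so eigvalsh applies
assert np.allclose(np.sort(eigs), [0] * (m - n) + [1] * n)   # eigenvalues are 0 or 1
assert np.linalg.matrix_rank(P) == n     # number of ones equals rank[A] = n
```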

2 An inequality of a class of matrices

In this section, we establish a new property of the eigenvalues of a class of matrices related to the symmetric positive definite matrix. If A∈R^{n×n} is a symmetric positive definite matrix, then there exists an invertible matrix B such that A = BB^T. Suppose that B can be expressed as the block matrix

$ \mathit{\boldsymbol{B}}: = \left[ {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{B}}_1}}\\ {{\mathit{\boldsymbol{B}}_2}} \end{array}} \right]: = \left[ {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{B}}_{11}}}&{{\mathit{\boldsymbol{B}}_{12}}}\\ {{\mathit{\boldsymbol{B}}_{21}}}&{{\mathit{\boldsymbol{B}}_{22}}} \end{array}} \right], $

where B_{11}∈R^{m×m}, B_{12}∈R^{m×p}, B_{21}∈R^{p×m}, B_{22}∈R^{p×p} and m + p = n. Using these symbols, the matrix A can be expressed as

$ \boldsymbol{B}\boldsymbol{B}^{\rm T} = \left[ \begin{array}{c} \boldsymbol{B}_1\\ \boldsymbol{B}_2 \end{array} \right]\left[ \boldsymbol{B}_1^{\rm T},\boldsymbol{B}_2^{\rm T} \right] = \left[ \begin{array}{cc} \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} & \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T}\\ \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} & \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \end{array} \right] = :\left[ \begin{array}{cc} \boldsymbol{A}_{11} & \boldsymbol{A}_{12}\\ \boldsymbol{A}_{21} & \boldsymbol{A}_{22} \end{array} \right]. $

Set

$ \boldsymbol{N}: = \left[ \begin{array}{cc} \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} & {\bf{0}}\\ {\bf{0}} & \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \end{array} \right] \in {\bf{R}}^{n \times n}. $

With these symbols, the following result holds.

Theorem 1   If the eigenvalues of the matrix N^{-1}A are δ_1, δ_2, …, δ_n, then 0 < δ_1δ_2⋯δ_n ≤ 1.

Proof   Let f(λ) := det(λI_n − N^{-1}A) be the characteristic polynomial of the matrix N^{-1}A. We have

$ f\left( \lambda \right) = \det \left( \lambda \boldsymbol{I}_n - \left[ \begin{array}{cc} \boldsymbol{I}_m & \left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\left( \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T} \right)\\ \left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\left( \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} \right) & \boldsymbol{I}_p \end{array} \right] \right). $ (3)

We first verify that 2 is not an eigenvalue of the matrix N^{-1}A. If 2 were an eigenvalue of N^{-1}A, then

$ \begin{array}{l} f\left( 2 \right) = \\ \det \left( {\left[ {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{I}}_m}}&{ - {{\left( {{\mathit{\boldsymbol{B}}_1}\mathit{\boldsymbol{B}}_1^{\rm{T}}} \right)}^{ - 1}}\left( {{\mathit{\boldsymbol{B}}_1}\mathit{\boldsymbol{B}}_2^{\rm{T}}} \right)}\\ { - {{\left( {{\mathit{\boldsymbol{B}}_2}\mathit{\boldsymbol{B}}_2^{\rm{T}}} \right)}^{ - 1}}\left( {{\mathit{\boldsymbol{B}}_2}\mathit{\boldsymbol{B}}_1^{\rm{T}}} \right)}&{{\mathit{\boldsymbol{I}}_p}} \end{array}} \right]} \right) = 0. \end{array} $

On the other hand,

$ \begin{array}{l} f\left( 2 \right) = \det \left( \left[ \begin{array}{cc} \boldsymbol{I}_m & -\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\left( \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T} \right)\\ -\left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\left( \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} \right) & \boldsymbol{I}_p \end{array} \right] \right) = \\ \det \left( \left[ \begin{array}{cc} \left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1} & {\bf{0}}\\ {\bf{0}} & \left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1} \end{array} \right] \times \left[ \begin{array}{cc} \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} & -\boldsymbol{B}_1\boldsymbol{B}_2^{\rm T}\\ -\boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} & \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \end{array} \right] \right) = \\ \det \left( \left[ \begin{array}{cc} \left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1} & {\bf{0}}\\ {\bf{0}} & \left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1} \end{array} \right] \right) \times \det \left( \left[ \begin{array}{c} \boldsymbol{B}_1\\ -\boldsymbol{B}_2 \end{array} \right]\left[ \boldsymbol{B}_1^{\rm T}, -\boldsymbol{B}_2^{\rm T} \right] \right) \ne 0. \end{array} $

This is a contradiction (the last determinant is nonzero, since stacking B_1 over −B_2 only changes the sign of the last p rows of the invertible matrix B), so 2 is not an eigenvalue of the matrix N^{-1}A.

Suppose that m ≥ p. According to lemma 1, equation (3) can be manipulated as

$ \begin{array}{l} f\left( \lambda \right) = \det \left( \left[ \begin{array}{cc} \left( \lambda - 1 \right)\boldsymbol{I}_m & -\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\left( \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T} \right)\\ -\left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\left( \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} \right) & \left( \lambda - 1 \right)\boldsymbol{I}_p \end{array} \right] \right) = \\ \det \left( \left( \lambda - 1 \right)\boldsymbol{I}_m \right)\det \left( \left( \lambda - 1 \right)\boldsymbol{I}_p - \left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\left( \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} \right)\left( \lambda - 1 \right)^{-1}\boldsymbol{I}_m\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\left( \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T} \right) \right) = \\ \det \left( \left( \lambda - 1 \right)\boldsymbol{I}_m \right)\det \left( \left( \lambda - 1 \right)\boldsymbol{I}_p - \left( \lambda - 1 \right)^{-1}\boldsymbol{I}_p\left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\left( \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} \right)\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\left( \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T} \right) \right) = \\ \left( \lambda - 1 \right)^{m-p}\det \left( \left( \lambda - 1 \right)^2\boldsymbol{I}_p - \left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\boldsymbol{B}_2\left[ \boldsymbol{B}_1^{\rm T}\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\boldsymbol{B}_1 \right]\boldsymbol{B}_2^{\rm T} \right). \end{array} $ (4)

Since B_1^T(B_1B_1^T)^{-1}B_1 is a real symmetric matrix, according to lemma 2, there exists an orthogonal matrix Q := [q_1, q_2, …, q_n]∈R^{n×n} such that

$ \begin{array}{l} \left[ {\mathit{\boldsymbol{B}}_1^{\rm{T}}{{\left( {{\mathit{\boldsymbol{B}}_1}\mathit{\boldsymbol{B}}_1^{\rm{T}}} \right)}^{ - 1}}{\mathit{\boldsymbol{B}}_1}} \right]\left[ {{\mathit{\boldsymbol{q}}_1},{\mathit{\boldsymbol{q}}_2}, \cdots ,{\mathit{\boldsymbol{q}}_n}} \right] = \\ \;\;\;\;\;\;\;\left[ {{\mathit{\boldsymbol{q}}_1},{\mathit{\boldsymbol{q}}_2}, \cdots ,{\mathit{\boldsymbol{q}}_n}} \right]{\rm{diag}}\left[ {1, \cdots ,1,0, \cdots ,0} \right] = \\ \;\;\;\;\;\;\;\left[ {{\mathit{\boldsymbol{q}}_1},{\mathit{\boldsymbol{q}}_2}, \cdots ,{\mathit{\boldsymbol{q}}_m},0,0, \cdots ,0} \right], \end{array} $ (5)

where m = rank[B_1^T(B_1B_1^T)^{-1}B_1].

Let C := (B_2B_2^T)^{-1}B_2[B_1^T(B_1B_1^T)^{-1}B_1]B_2^T∈R^{p×p}, and suppose that ρ∈λ[C]. Then there exists a nonzero vector y∈R^p such that Cy = ρy. Manipulating this equation gives

$ \left\{ \begin{array}{l} {\left( {{\mathit{\boldsymbol{B}}_2}\mathit{\boldsymbol{B}}_2^{\rm{T}}} \right)^{ - 1}}{\mathit{\boldsymbol{B}}_2}\left[ {\mathit{\boldsymbol{B}}_1^{\rm{T}}{{\left( {{\mathit{\boldsymbol{B}}_1}\mathit{\boldsymbol{B}}_1^{\rm{T}}} \right)}^{ - 1}}{\mathit{\boldsymbol{B}}_1}} \right]\mathit{\boldsymbol{B}}_2^{\rm{T}}y = \rho y,\\ {\mathit{\boldsymbol{B}}_2}\left[ {\mathit{\boldsymbol{B}}_1^{\rm{T}}{{\left( {{\mathit{\boldsymbol{B}}_1}\mathit{\boldsymbol{B}}_1^{\rm{T}}} \right)}^{ - 1}}{\mathit{\boldsymbol{B}}_1}} \right]\mathit{\boldsymbol{B}}_2^{\rm{T}}y = \rho \left( {{\mathit{\boldsymbol{B}}_2}\mathit{\boldsymbol{B}}_2^{\rm{T}}} \right)y. \end{array} \right. $ (6)

It is clear that B_2^T y can be expressed as a linear combination of q_1, q_2, …, q_n, i.e.,

$ \mathit{\boldsymbol{B}}_2^{\rm{T}}y = {k_1}{\mathit{\boldsymbol{q}}_1} + {k_2}{\mathit{\boldsymbol{q}}_2} + \cdots + {k_n}{\mathit{\boldsymbol{q}}_n}, $ (7)

where k_i∈R, i = 1, 2, …, n, and k_1^2 + k_2^2 + … + k_n^2 ≠ 0 (since B_2^T has full column rank and y ≠ 0). Using equations (5) and (7), we have

$ \begin{array}{l} \left( \boldsymbol{B}_2^{\rm T}\boldsymbol{y} \right)^{\rm T}\left( \boldsymbol{B}_2^{\rm T}\boldsymbol{y} \right) = \left( k_1\boldsymbol{q}_1 + k_2\boldsymbol{q}_2 + \cdots + k_n\boldsymbol{q}_n \right)^{\rm T} \times \\ \;\;\;\;\;\;\;\left( k_1\boldsymbol{q}_1 + k_2\boldsymbol{q}_2 + \cdots + k_n\boldsymbol{q}_n \right) = \\ \;\;\;\;\;\;\;k_1^2 + k_2^2 + \cdots + k_n^2, \end{array} $ (8)
$ \begin{array}{l} \left[ \boldsymbol{B}_1^{\rm T}\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\boldsymbol{B}_1 \right]\boldsymbol{B}_2^{\rm T}\boldsymbol{y} = \left[ \boldsymbol{B}_1^{\rm T}\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\boldsymbol{B}_1 \right] \times \\ \;\;\;\;\;\;\;\left( k_1\boldsymbol{q}_1 + k_2\boldsymbol{q}_2 + \cdots + k_n\boldsymbol{q}_n \right) = \\ \;\;\;\;\;\;\;k_1\boldsymbol{q}_1 + k_2\boldsymbol{q}_2 + \cdots + k_m\boldsymbol{q}_m. \end{array} $ (9)

Multiplying both sides of the second equation in (6) by y^T gives

$ \boldsymbol{y}^{\rm T}\boldsymbol{B}_2\left[ \boldsymbol{B}_1^{\rm T}\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\boldsymbol{B}_1 \right]\boldsymbol{B}_2^{\rm T}\boldsymbol{y} = \rho \boldsymbol{y}^{\rm T}\boldsymbol{B}_2\boldsymbol{B}_2^{\rm T}\boldsymbol{y}. $ (10)

According to equations (7) and (9), the left-hand side of equation (10) can be rewritten as

$ \begin{array}{l} {\left( {{k_1}{\mathit{\boldsymbol{q}}_1} + {k_2}{\mathit{\boldsymbol{q}}_2} + \cdots + {k_n}{\mathit{\boldsymbol{q}}_n}} \right)^{\rm{T}}} \times \\ \;\;\;\;\;\;\;\;\;\left( {{k_1}{\mathit{\boldsymbol{q}}_1} + {k_2}{\mathit{\boldsymbol{q}}_2} + \cdots + {k_m}{\mathit{\boldsymbol{q}}_m}} \right) = \\ \;\;\;\;\;\;\;\;\;k_1^2 + k_2^2 + \cdots + k_m^2, \end{array} $ (11)

Combining equations (8), (10) and (11) gives

$ \left( {k_1^2 + k_2^2 + \cdots + k_m^2} \right) = \rho \left( {k_1^2 + k_2^2 + \cdots + k_n^2} \right). $

Hence, we have

$ 0 \le \rho = \frac{{k_1^2 + k_2^2 + \cdots + k_m^2}}{{k_1^2 + k_2^2 + \cdots + k_n^2}} \le 1. $ (12)

Since 0 ≤ ρ ≤ 1 for every ρ∈λ[C], we may write λ[C] = {ρ_1^2, ρ_2^2, …, ρ_p^2} with 0 ≤ ρ_p ≤ … ≤ ρ_1 ≤ 1. According to the Schur decomposition theorem, there exists an orthogonal matrix U such that

$ {\mathit{\boldsymbol{U}}^{\rm{T}}}\mathit{\boldsymbol{CU}} = {\rm{diag}}\left[ {\rho _1^2,\rho _2^2, \cdots ,\rho _p^2} \right] + \mathit{\boldsymbol{R}}, $ (13)

where R is strictly upper triangular. Substituting equation (13) into equation (4) and simplifying gives

$ \begin{array}{l} f\left( \lambda \right) = \left( \lambda - 1 \right)^{m-p}\det \left( \left( \lambda - 1 \right)^2\boldsymbol{I}_p - \boldsymbol{C} \right) = \\ \;\;\;\;\left( \lambda - 1 \right)^{m-p}\det \left( \boldsymbol{U}^{\rm T} \right)\det \left( \left( \lambda - 1 \right)^2\boldsymbol{I}_p - \boldsymbol{C} \right)\det \left( \boldsymbol{U} \right) = \\ \;\;\;\;\left( \lambda - 1 \right)^{m-p}\det \left( \boldsymbol{U}^{\rm T}\left( \lambda - 1 \right)^2\boldsymbol{I}_p\boldsymbol{U} - \boldsymbol{U}^{\rm T}\boldsymbol{C}\boldsymbol{U} \right) = \\ \;\;\;\;\left( \lambda - 1 \right)^{m-p}\det \left( \left( \lambda - 1 \right)^2\boldsymbol{I}_p - {\rm diag}\left[ \rho_1^2,\rho_2^2, \cdots ,\rho_p^2 \right] - \boldsymbol{R} \right) = \\ \;\;\;\;\left( \lambda - 1 \right)^{m-p}\left[ \left( \lambda - 1 \right)^2 - \rho_1^2 \right] \cdots \left[ \left( \lambda - 1 \right)^2 - \rho_p^2 \right] = \\ \;\;\;\;\left( \lambda - 1 \right)^{m-p}\left( \lambda - 1 - \rho_1 \right)\left( \lambda - 1 + \rho_1 \right) \times \cdots \times \left( \lambda - 1 - \rho_p \right)\left( \lambda - 1 + \rho_p \right). \end{array} $

From this equation, we can see that the eigenvalues of N^{-1}A are

$ 1 + {\rho _1},1 - {\rho _1}, \cdots ,1 + {\rho _p},1 - {\rho _p},1, \cdots ,1. $ (14)

It follows that

$ \begin{array}{l} 0 \le {{\rm{\delta }}_1}{{\rm{\delta }}_2} \cdots {{\rm{\delta }}_n} = \\ \;\;\;\;\;\;\left( {1 + {\rho _1}} \right)\left( {1 - {\rho _1}} \right) \cdots \left( {1 + {\rho _p}} \right)\left( {1 - {\rho _p}} \right) = \\ \;\;\;\;\;\;\left( {1 - \rho _1^2} \right)\left( {1 - \rho _2^2} \right) \cdots \left( {1 - \rho _p^2} \right) \le 1. \end{array} $ (15)

Since 2 is not an eigenvalue of the matrix N^{-1}A, none of the ρ_i equals 1, and inequality (12) can be improved to 0 ≤ ρ < 1.

Correspondingly, inequality (15) can be improved to

$ \begin{array}{l} 0 < \delta_1\delta_2 \cdots \delta_n = \\ \;\;\;\;\;\;\left( 1 + \rho_1 \right)\left( 1 - \rho_1 \right) \cdots \left( 1 + \rho_p \right)\left( 1 - \rho_p \right) = \\ \;\;\;\;\;\;\left( 1 - \rho_1^2 \right)\left( 1 - \rho_2^2 \right) \cdots \left( 1 - \rho_p^2 \right) \le 1. \end{array} $

The proof is completed.
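A numerical illustration of theorem 1 (a sketch only; the block sizes m, p and the random construction of B are arbitrary choices made here): build A = BB^T, form the block-diagonal matrix N, and check that the spectrum of N^{-1}A consists of the pairs 1 ± ρ_i together with ones, and that the product of the eigenvalues lies in (0, 1].

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 4, 2
n = m + p
B = rng.standard_normal((n, n))          # invertible with probability 1
A = B @ B.T                              # symmetric positive definite
B1, B2 = B[:m, :], B[m:, :]
N = np.block([[B1 @ B1.T, np.zeros((m, p))],
              [np.zeros((p, m)), B2 @ B2.T]])

delta = np.sort(np.linalg.eigvals(np.linalg.inv(N) @ A).real)   # real by theorem 1
assert 0 < np.prod(delta) <= 1           # the inequality 0 < delta_1 ... delta_n <= 1
# the eigenvalues (14) are symmetric about 1: {1 - rho_i, 1, ..., 1, 1 + rho_i}
assert np.allclose(delta + delta[::-1], 2.0)
```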

Remark 1   The above proof shows that the supposition m ≥ p is not essential. In fact, if m < p, then according to equation (2) of lemma 1, we have

$ \begin{array}{l} f\left( \lambda \right) = \det \left( \left[ \begin{array}{cc} \left( \lambda - 1 \right)\boldsymbol{I}_m & -\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\left( \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T} \right)\\ -\left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\left( \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} \right) & \left( \lambda - 1 \right)\boldsymbol{I}_p \end{array} \right] \right) = \\ \det \left( \left( \lambda - 1 \right)\boldsymbol{I}_p \right)\det \left( \left( \lambda - 1 \right)\boldsymbol{I}_m - \left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\left( \boldsymbol{B}_1\boldsymbol{B}_2^{\rm T} \right)\left( \lambda - 1 \right)^{-1}\boldsymbol{I}_p\left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\left( \boldsymbol{B}_2\boldsymbol{B}_1^{\rm T} \right) \right) = \\ \det \left( \left( \lambda - 1 \right)\boldsymbol{I}_p \right)\det \left( \left( \lambda - 1 \right)\boldsymbol{I}_m - \left( \lambda - 1 \right)^{-1}\boldsymbol{I}_m\left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\boldsymbol{B}_1\left[ \boldsymbol{B}_2^{\rm T}\left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\boldsymbol{B}_2 \right]\boldsymbol{B}_1^{\rm T} \right) = \\ \left( \lambda - 1 \right)^{p-m}\det \left( \left( \lambda - 1 \right)^2\boldsymbol{I}_m - \left( \boldsymbol{B}_1\boldsymbol{B}_1^{\rm T} \right)^{-1}\boldsymbol{B}_1\left[ \boldsymbol{B}_2^{\rm T}\left( \boldsymbol{B}_2\boldsymbol{B}_2^{\rm T} \right)^{-1}\boldsymbol{B}_2 \right]\boldsymbol{B}_1^{\rm T} \right). \end{array} $

This manipulation does not change the subsequent proof.

3 New proof of the Fischer inequality

In this section, we use the result of theorem 1 to prove two determinant inequalities related to the symmetric positive definite matrix, namely the Fischer inequality and the Hadamard inequality.

Theorem 2   Consider the following symmetric positive definite block matrix

$ \mathit{\boldsymbol{M}}: = \left( {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{M}}_{11}}}&{{\mathit{\boldsymbol{M}}_{12}}}& \cdots &{{\mathit{\boldsymbol{M}}_{1k}}}\\ {{\mathit{\boldsymbol{M}}_{21}}}&{{\mathit{\boldsymbol{M}}_{22}}}& \cdots &{{\mathit{\boldsymbol{M}}_{2k}}}\\ \vdots & \vdots &{}& \vdots \\ {{\mathit{\boldsymbol{M}}_{k1}}}&{{\mathit{\boldsymbol{M}}_{k2}}}& \cdots &{{\mathit{\boldsymbol{M}}_{kk}}} \end{array}} \right), $

where M_{ii}, i = 1, 2, …, k, are the (positive definite) diagonal submatrices; then

$ \det\left( \mathit{\boldsymbol{M}} \right) \le \det\left( {{\mathit{\boldsymbol{M}}_{11}}} \right)\det\left( {{\mathit{\boldsymbol{M}}_{22}}} \right) \cdots \det \left( {{\mathit{\boldsymbol{M}}_{kk}}} \right). $

Proof   For k = 2, set $\boldsymbol{N} = \left[{\begin{array}{*{20}{c}} {{\boldsymbol{M}_{11}}}&0\\ 0&{{\boldsymbol{M}_{22}}} \end{array}} \right]$. According to theorem 1, we have

$ \det\left( \boldsymbol{N}^{-1}\boldsymbol{M} \right) = \det\left( \boldsymbol{N}^{-1} \right)\det\left( \boldsymbol{M} \right) = \delta_1\delta_2 \cdots \delta_n \le 1. $

That is,

$ \det\left( \mathit{\boldsymbol{M}} \right) \le \det\left( \mathit{\boldsymbol{N}} \right) = \det\left( {{\mathit{\boldsymbol{M}}_{11}}} \right)\det\left( {{\mathit{\boldsymbol{M}}_{22}}} \right). $

For k>2, using this manipulation successively gives

$ \det\left( \mathit{\boldsymbol{M}} \right) \le \det\left( {{\mathit{\boldsymbol{M}}_{11}}} \right)\det\left( {{\mathit{\boldsymbol{M}}_{22}}} \right) \cdots \det \left( {{\mathit{\boldsymbol{M}}_{kk}}} \right). $

The proof is completed.

It is clear that the Hadamard inequality is the special case of the Fischer inequality with k = n, so the following result holds.

Theorem 3   If M = (m_{ij})∈R^{n×n} is a symmetric positive definite matrix, then

$ \det\left( \mathit{\boldsymbol{M}} \right) \le {m_{11}}{m_{22}} \cdots {m_{nn}}. $
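Theorems 2 and 3 can be checked numerically in the same spirit. The sketch below (a random symmetric positive definite M with arbitrary block sizes chosen here for illustration) verifies the determinant bound det(N^{-1}M) ≤ 1 used in the proof, the Fischer inequality, and the Hadamard inequality as the all-blocks-of-size-one case.

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [2, 3, 1]                                    # block sizes, k = 3
n = sum(sizes)
G = rng.standard_normal((n, n))
M = G @ G.T                                          # symmetric positive definite

idx = np.cumsum([0] + sizes)                         # block boundaries
diag_blocks = [M[idx[i]:idx[i + 1], idx[i]:idx[i + 1]] for i in range(len(sizes))]
N = np.zeros((n, n))
for i, Mii in enumerate(diag_blocks):
    N[idx[i]:idx[i + 1], idx[i]:idx[i + 1]] = Mii

tol = 1e-12
assert np.linalg.det(np.linalg.inv(N) @ M) <= 1 + tol                       # det(N^{-1}M) <= 1
assert np.linalg.det(M) <= np.prod([np.linalg.det(Mii) for Mii in diag_blocks]) + tol  # Fischer
assert np.linalg.det(M) <= np.prod(np.diag(M)) + tol                        # Hadamard (theorem 3)
```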
4 Conclusions and future work

The eigenvalues of a class of matrices related to the real symmetric positive definite matrix are discussed in this paper, and an inequality about these eigenvalues is established. Using this result, the Fischer inequality and the Hadamard inequality for positive definite matrices are proved.

Let S and T denote subsets of the set W := {1, 2, …, n} with S∪T = W. cd(S) denotes the cardinality of the set S, S^c represents the complement of S in W, and M_S denotes the principal submatrix of M determined by S.

Consider the following Koteljanskii, Fan and Szász inequalities [15]:

$ \begin{array}{c} \det \left( \boldsymbol{M} \right) \le \dfrac{\det \left( \boldsymbol{M}_S \right)\det \left( \boldsymbol{M}_T \right)}{\det \left( \boldsymbol{M}_{S \cap T} \right)},\\ \det \left( \boldsymbol{M} \right) \le \det \left( \boldsymbol{M}_S \right)\prod\limits_{j \in S^c} \dfrac{\det \left( \boldsymbol{M}_{S \cup \left\{ j \right\}} \right)}{\det \left( \boldsymbol{M}_S \right)},\\ \prod\limits_{{\rm cd}\left( S \right) = j + 1} \det \left( \boldsymbol{M}_S \right)^{\binom{n-1}{j-1}} \le \prod\limits_{{\rm cd}\left( T \right) = j} \det \left( \boldsymbol{M}_T \right)^{\binom{n-1}{j}},\\ j = 1,2, \cdots ,n - 1, \end{array} $

where M∈R^{n×n} and $\binom{n}{j}$ denotes the binomial coefficient. These inequalities can be proved using this method; this will be discussed in future work.
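As a first step in that direction, these inequalities are easy to test numerically. The sketch below (a random symmetric positive definite M, small n, hand-picked index sets S and T with S∪T = W; the determinant of an empty principal submatrix is taken to be 1; all of these choices are ours, not the paper's) checks the Koteljanskii and Fan inequalities stated above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
G = rng.standard_normal((n, n))
M = G @ G.T                                  # symmetric positive definite

def minor(index_set):
    """Determinant of the principal submatrix M_S; the empty set gives 1."""
    S = sorted(index_set)
    return np.linalg.det(M[np.ix_(S, S)]) if S else 1.0

W = set(range(n))
S, T = {0, 1, 2}, {2, 3, 4}                  # S ∪ T = W, S ∩ T = {2}

# Koteljanskii: det(M) <= det(M_S) det(M_T) / det(M_{S ∩ T})
assert np.linalg.det(M) * minor(S & T) <= minor(S) * minor(T) + 1e-12

# Fan: det(M) <= det(M_S) * prod_{j in S^c} det(M_{S ∪ {j}}) / det(M_S)
rhs = minor(S) * np.prod([minor(S | {j}) / minor(S) for j in W - S])
assert np.linalg.det(M) <= rhs + 1e-12
```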

References
[1] WEIR T. Pre-invex functions in multiobjective optimization[J]. Journal of Mathematical Analysis and Applications, 1998, 136(1): 29–38.
[2] MOHAN S R, NEOGY S K. On invex sets and pre-invex functions[J]. Journal of Mathematical Analysis and Applications, 1995, 189(3): 901–908. DOI:10.1006/jmaa.1995.1057
[3] NOOR M A. Variational-like inequalities[J]. Optimization, 1994, 30(4): 323–330. DOI:10.1080/02331939408843995
[4] YANG X M, LI D. On properties of pre-invex functions[J]. Journal of Mathematical Analysis and Applications, 2001, 256(1): 229–241. DOI:10.1006/jmaa.2000.7310
[5] WANG S G, WU M X, JIA Z Z. The Matrix Inequalities[M]. 2nd ed. Beijing: Science Press, 2006.
[6] BELLMAN R. Introduction to Matrix Analysis[M]. New York: McGraw-Hill Book Company, 1970.
[7] ZENG C N, XU W X, ZHOU J Z. Several notes on Hadamard theorem[J]. Journal of Math, 2010, 30(1): 152–156.
[8] ZHANG X D, YANG S J. A note on Hadamard's inequality[J]. Acta Mathematicae Applicatae Sinica, 1997, 20(2): 269–274.
[9] LI X Y, LENG G S. Inverse forms of Hadamard inequality and Szasz inequality[J]. Journal of Natural Science of Hunan Normal University, 2007, 30(2): 19–21.
[10] ZHANG X D. Matrix Analysis and Applications[M]. Beijing: Tsinghua University Press, 2004.
[11] HORN R A, JOHNSON C R. Matrix Analysis[M]. Cambridge: Cambridge University Press, 1985.
[12] GOLUB G H, VAN LOAN C F. Matrix Computations[M]. 3rd ed. Baltimore, MD: Johns Hopkins University Press, 1996.
[13] YIN H C, ZHANG H M. Eigenvalues of a class of matrices related to the positive definite matrices[J]. Journal of Zhejiang University:Science Edition, 2014, 41(1): 1–5.
[14] ZHANG H M, DING F. A property of the eigenvalues of the symmetric positive definite matrix and the iterative algorithm for coupled Sylvester matrix equations[J]. Journal of the Franklin Institute, 2014, 351(1): 340–357. DOI:10.1016/j.jfranklin.2013.08.023
[15] GOVER E, KRIKORIAN N. Determinants and the volumes of parallelotopes and zonotopes[J]. Linear Algebra and Its Applications, 2010, 433(1): 28–40. DOI:10.1016/j.laa.2010.01.031