Contents

I. Overview of Common Metrics
1.1 Metric Classification
II. Metric Details and Python Implementations (Selected)
2.1 Information-Entropy-Based Metrics
A. Entropy (EN)
B. Cross Entropy (CE)
C. Mutual Information (MI)
D. Peak Signal-to-Noise Ratio (PSNR)
E. Edge-Information-Based Metric (QAB/F)
2.2 Image-Feature-Based Metrics
A. Standard Deviation (SD)
B. Average Gradient (AG)
C. Spatial Frequency (SF)
2.3 Structure-Based Metrics
A. Structural Similarity Index Measure (SSIM)
B. Multi-Scale Structural Similarity Index Measure (MS-SSIM)
C. Mean Squared Error (MSE)
2.4 Human-Visual-Perception-Based Metrics
A. Visual Information Fidelity for Fusion (VIFF)
2.5 Correlation-Based Metrics
A. Correlation Coefficient (CC)
B. Sum of Correlation Differences (SCD)
I. Overview of Common Metrics
| Metric | Abbreviation |
| --- | --- |
| Entropy | EN |
| Cross entropy | CE |
| Mutual information | MI |
| Peak signal-to-noise ratio | PSNR |
| Spatial frequency | SF |
| Standard deviation | SD |
| Mean squared error | MSE |
| Visual information fidelity | VIF |
| Average gradient | AG |
| Correlation coefficient | CC |
| Sum of correlation differences | SCD |
| Gradient-based fusion performance | Qabf |
| Structural similarity index measure | SSIM |
| Multi-scale structural similarity index measure | MS-SSIM |
| Noise-based fusion performance | Nabf |
1.1 Metric Classification
Performance evaluation metrics fall into the following groups (a minimal usage sketch follows the list):

- Information-entropy-based metrics: EN, CE, MI, PSNR, Qabf, Nabf
- Image-structure-based metrics: SSIM, MS-SSIM, MSE
- Image-feature-based metrics: SD, SF, AG
- Human-visual-perception-based metrics: VIF
- Correlation-based metrics: CC, SCD
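To make the workflow concrete, here is a minimal usage sketch. It assumes OpenCV for image I/O, placeholder file names, and the Method 1 functions defined in Section II below; adapt it to your own data and metric set:

```python
import cv2
import numpy as np

# Placeholder file names -- substitute your own images
A = cv2.imread('visible.png', cv2.IMREAD_GRAYSCALE)   # source image A (uint8)
B = cv2.imread('infrared.png', cv2.IMREAD_GRAYSCALE)  # source image B (uint8)
F = cv2.imread('fused.png', cv2.IMREAD_GRAYSCALE)     # fused image F (uint8)
Af, Bf, Ff = A.astype(np.float64), B.astype(np.float64), F.astype(np.float64)

# No-reference metrics look only at the fused image
print('EN :', EN_function(Ff))
print('SD :', SD_function(Ff))
print('SF :', SF_function(Ff))
print('AG :', AG_function(Ff))

# Full-reference metrics compare F against both source images
print('MI  :', MI_function(A, B, F))  # joint histogram needs integer gray levels
print('PSNR:', PSNR_function(Af, Bf, Ff))
print('CC  :', CC_function(Af, Bf, Ff))
print('SCD :', SCD_function(Af, Bf, Ff))
```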
II. Metric Details and Python Implementations (Selected)
2.1 Information-Entropy-Based Metrics
A. Entropy (EN)
Entropy measures how rich the information contained in an image is (larger is better):

$$EN = -\sum_{i=0}^{L-1} p_i \log_2 p_i$$

For a grayscale image whose pixels take values 0-255, $L = 256$ and $p_i$ is the probability of gray level $i$, computed as the ratio of the number of pixels with gray level $i$, $N_i$, to the total number of pixels $N$, i.e. $p_i = N_i / N$. RGB images are generally converted to grayscale before computing entropy.
Code:
Method 1:

```python
import numpy as np

def EN_function(image_array):
    # Compute the image histogram
    histogram, bins = np.histogram(image_array, bins=256, range=(0, 255))
    # Normalize the histogram into a probability distribution
    histogram = histogram / float(np.sum(histogram))
    # Compute the entropy
    entropy = -np.sum(histogram * np.log2(histogram + 1e-7))
    return entropy
```
Method 2:

```python
@classmethod
def EN(cls, img):  # entropy
    cls.input_check(img)
    a = np.uint8(np.round(img)).flatten()
    h = np.bincount(a) / a.shape[0]
    return -sum(h * np.log2(h + (h == 0)))
```
`np.round()` rounds its argument to the nearest integer. `np.bincount` counts how often each integer value occurs in the array `a`; dividing by the total pixel count `a.shape[0]` yields the probability distribution over gray levels. The `(h == 0)` term adds 1 inside the logarithm only for empty bins, so they contribute $0 \cdot \log_2 1 = 0$.
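A quick worked example of that probability computation (the toy array is mine):

```python
import numpy as np

a = np.array([0, 0, 1, 2, 2, 2], dtype=np.uint8)
h = np.bincount(a) / a.shape[0]       # probabilities [1/3, 1/6, 1/2]
en = -sum(h * np.log2(h + (h == 0)))  # ~1.459 bits
print(h, en)
```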
B. Cross Entropy (CE)
Cross entropy measures the difference in information between the generated image and the source image, where $A$ is the source image and $B$ is the generated image; one common formulation is:

$$CE(A, B) = \sum_{i=0}^{L-1} p_{A_i} \log_2 \frac{p_{A_i}}{p_{B_i}}$$

If there are multiple source images, e.g. visible and infrared images $A$ and $B$ fused into a single image $F$, the two cross entropies are averaged:

$$CE = \frac{CE(A, F) + CE(B, F)}{2}$$
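A minimal sketch of CE under the definitions above, assuming 8-bit grayscale inputs (the function names are mine, not from the reference implementations):

```python
import numpy as np

def cross_entropy(src, fused, eps=1e-10):
    # Gray-level distributions of one source image and the fused image
    p = np.bincount(src.flatten(), minlength=256) / src.size      # p_A
    q = np.bincount(fused.flatten(), minlength=256) / fused.size  # p_F
    mask = p > 0  # empty source bins contribute nothing
    return np.sum(p[mask] * np.log2(p[mask] / (q[mask] + eps)))

def CE_function(A, B, F):
    # Two sources A, B fused into F: average the two cross entropies
    return 0.5 * (cross_entropy(A, F) + cross_entropy(B, F))
```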
C. Mutual Information (MI)
Mutual information MI (larger is better) reflects how similar the pixel distributions of the generated image and the source image are. With $A$ the source image and $B$ the generated image:

$$MI(A, B) = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} p_{B,A}(b, a) \log_2 \frac{p_{B,A}(b, a)}{p_B(b)\, p_A(a)}$$

where $p_{B,A}$ is the joint probability distribution of $B$ and $A$. For $p_A$ or $p_B$ there are 256 gray-level probabilities over $\{0, 1, \ldots, 255\}$, so $p_{B,A}$ covers $256 \times 256$ gray-level pairs $\{(0,0), (0,1), \ldots, (255,255)\}$.
This simplifies to:

$$MI(X; Y) = H(X) + H(Y) - H(X, Y)$$

where $H(X)$ and $H(Y)$ are the entropies of $X$ and $Y$, and $H(X, Y)$ is their joint entropy.
Method 1:

```python
import math
import numpy as np

def Hab(im1, im2, gray_level):
    hang, lie = im1.shape
    N = gray_level
    h = np.zeros((N, N))
    # Step 1: compute the joint histogram
    for i in range(hang):
        for j in range(lie):
            h[im1[i, j], im2[i, j]] = h[im1[i, j], im2[i, j]] + 1
    h = h / np.sum(h)  # normalize
    # Step 2: compute the marginal probability distributions
    im1_marg = np.sum(h, axis=0)
    im2_marg = np.sum(h, axis=1)
    # Step 3: entropy H(X)
    H_x = 0
    for i in range(N):
        if im1_marg[i] != 0:
            H_x = H_x + im1_marg[i] * math.log2(im1_marg[i])
    H_x = -H_x  # note the minus sign in the formula
    # Step 4: entropy H(Y)
    H_y = 0
    for i in range(N):
        if im2_marg[i] != 0:
            H_y = H_y + im2_marg[i] * math.log2(im2_marg[i])
    H_y = -H_y
    # Step 5: joint entropy H(X,Y)
    H_xy = 0
    for i in range(N):
        for j in range(N):
            if h[i, j] != 0:
                H_xy = H_xy + h[i, j] * math.log2(h[i, j])
    H_xy = -H_xy
    # Step 6: MI = H(X) + H(Y) - H(X,Y)
    MI = H_x + H_y - H_xy
    return MI

def MI_function(A, B, F, gray_level=256):
    MIA = Hab(A, F, gray_level)  # MI between A and F
    MIB = Hab(B, F, gray_level)  # MI between B and F
    MI_results = MIA + MIB       # total MI over both sources
    return MI_results
```
Method 2:

```python
import sklearn.metrics as skm

@classmethod
def MI(cls, image_F, image_A, image_B):
    cls.input_check(image_F, image_A, image_B)
    # Note: mutual_info_score uses the natural logarithm (nats, not bits)
    return skm.mutual_info_score(image_F.flatten(), image_A.flatten()) + \
           skm.mutual_info_score(image_F.flatten(), image_B.flatten())
```
D. Peak Signal-to-Noise Ratio (PSNR)
PSNR measures distortion via the ratio of peak power to noise power (larger is better):

$$PSNR = 10 \log_{10} \frac{r^2}{MSE}$$

where $r$ is the peak value of the fused image, usually taken as 255 (the maximum gray value). MSE is the mean squared error between the fused image and the source images, defined as:

$$MSE = \frac{MSE_{AF} + MSE_{BF}}{2}, \qquad MSE_{XF} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl(X(i,j) - F(i,j)\bigr)^2$$
Method 1:

```python
import numpy as np

def PSNR_function(A, B, F):
    # A, B: source images; F: fused image; all 8-bit grayscale in [0, 255]
    A = A.astype(np.float64)
    B = B.astype(np.float64)
    F = F.astype(np.float64)
    m, n = F.shape
    # Average the MSE of F against each source image
    MSE_AF = np.sum((F - A) ** 2) / (m * n)
    MSE_BF = np.sum((F - B) ** 2) / (m * n)
    MSE = 0.5 * MSE_AF + 0.5 * MSE_BF
    PSNR = 20 * np.log10(255 / np.sqrt(MSE))
    return PSNR
```
Method 2:

```python
@classmethod
def PSNR(cls, image_F, image_A, image_B):
    cls.input_check(image_F, image_A, image_B)
    return 10 * np.log10(np.max(image_F) ** 2 / cls.MSE(image_F, image_A, image_B))
```
E. Edge-Information-Based Metric (QAB/F)
References:
- 图像融合网络的通用评估指标_图像融合评价指标-CSDN博客
- A new quality metric for image fusion | IEEE Conference Publication | IEEE Xplore
QAB/F measures how much edge information is transferred from the source images to the fused image (larger is better). Sobel operators extract a gradient magnitude $g$ and orientation $a$ from each image, per-pixel edge-preservation scores $Q^{AF}$ and $Q^{BF}$ are derived from them, and the scores are pooled with the source gradient magnitudes as weights:

$$Q^{AB/F} = \frac{\sum_{i,j} \bigl(Q^{AF}(i,j)\, g^A(i,j) + Q^{BF}(i,j)\, g^B(i,j)\bigr)}{\sum_{i,j} \bigl(g^A(i,j) + g^B(i,j)\bigr)}$$
Method 2:

```python
import math
import numpy as np
from scipy.signal import convolve2d

@classmethod
def Qabf(cls, image_F, image_A, image_B):
    cls.input_check(image_F, image_A, image_B)
    gA, aA = cls.Qabf_getArray(image_A)
    gB, aB = cls.Qabf_getArray(image_B)
    gF, aF = cls.Qabf_getArray(image_F)
    QAF = cls.Qabf_getQabf(aA, gA, aF, gF)
    QBF = cls.Qabf_getQabf(aB, gB, aF, gF)
    # Pool the per-pixel scores, weighted by the source gradient magnitudes
    deno = np.sum(gA + gB)
    nume = np.sum(np.multiply(QAF, gA) + np.multiply(QBF, gB))
    return nume / deno

@classmethod
def Qabf_getArray(cls, img):
    # Sobel operators for gradient extraction (h2, the diagonal kernel, is unused)
    h1 = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]).astype(np.float32)
    h2 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]).astype(np.float32)
    h3 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]).astype(np.float32)
    SAx = convolve2d(img, h3, mode='same')  # gradient in x
    SAy = convolve2d(img, h1, mode='same')  # gradient in y
    gA = np.sqrt(np.multiply(SAx, SAx) + np.multiply(SAy, SAy))  # gradient magnitude
    aA = np.zeros_like(img)  # gradient orientation
    aA[SAx == 0] = math.pi / 2
    aA[SAx != 0] = np.arctan(SAy[SAx != 0] / SAx[SAx != 0])
    return gA, aA

@classmethod
def Qabf_getQabf(cls, aA, gA, aF, gF):
    # Model parameters (L is unused)
    L = 1
    Tg = 0.9994
    kg = -15
    Dg = 0.5
    Ta = 0.9879
    ka = -22
    Da = 0.8
    GAF, AAF, QgAF, QaAF, QAF = np.zeros_like(aA), np.zeros_like(aA), np.zeros_like(aA), np.zeros_like(aA), np.zeros_like(aA)
    # Gradient-magnitude similarity
    GAF[gA > gF] = gF[gA > gF] / gA[gA > gF]
    GAF[gA == gF] = gF[gA == gF]
    GAF[gA < gF] = gA[gA < gF] / gF[gA < gF]
    # Gradient-orientation similarity
    AAF = 1 - np.abs(aA - aF) / (math.pi / 2)
    # Sigmoid mapping of the similarities into [0, 1]
    QgAF = Tg / (1 + np.exp(kg * (GAF - Dg)))
    QaAF = Ta / (1 + np.exp(ka * (AAF - Da)))
    # Combine magnitude and orientation similarity
    QAF = QgAF * QaAF
    return QAF
```
2.2 Image-Feature-Based Metrics
A. Standard Deviation (SD)
The standard deviation of an image measures how far its pixel values deviate from the mean pixel value; it is a statistic of how spread out the data are. In image processing, the standard deviation is commonly used to assess contrast and clarity, and therefore also to evaluate the quality of a fused image:

$$SD = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl(F(i,j) - \mu\bigr)^2}$$

where $\mu$ is the mean pixel value, or the unbiased estimate with $MN - 1$ in the denominator. Note that `np.std` in Method 2 defaults to the population formula (ddof=0), matching Method 1; pass `ddof=1` for the unbiased estimate.
Method 1:

```python
import numpy as np

def SD_function(image_array):
    m, n = image_array.shape
    u = np.mean(image_array)
    SD = np.sqrt(np.sum((image_array - u) ** 2) / (m * n))
    return SD
```
Method 2:

```python
@classmethod
def SD(cls, img):
    cls.input_check(img)
    return np.std(img)  # ddof=0 by default, i.e. the population formula
```
B. Average Gradient (AG)
The average gradient reflects the texture information of an image and is commonly used to assess sharpness. For an $M \times N$ image, it is computed as:

$$AG = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{\frac{\nabla_x F(i,j)^2 + \nabla_y F(i,j)^2}{2}}$$

where $\nabla_x F$ and $\nabla_y F$ are the horizontal and vertical gradients. A larger average gradient means the image contains richer information and indicates a better fusion result.
Method 1:

```python
import numpy as np

def AG_function(image):
    width = image.shape[1] - 1
    height = image.shape[0] - 1
    [grady, gradx] = np.gradient(image)
    s = np.sqrt((np.square(gradx) + np.square(grady)) / 2)
    AG = np.sum(s) / (width * height)
    return AG
```
Method 2:

```python
@classmethod
def AG(cls, img):  # average gradient
    cls.input_check(img)
    Gx, Gy = np.zeros_like(img), np.zeros_like(img)
    # Central differences inside, one-sided differences at the borders
    Gx[:, 0] = img[:, 1] - img[:, 0]
    Gx[:, -1] = img[:, -1] - img[:, -2]
    Gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2
    Gy[0, :] = img[1, :] - img[0, :]
    Gy[-1, :] = img[-1, :] - img[-2, :]
    Gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2
    return np.mean(np.sqrt((Gx ** 2 + Gy ** 2) / 2))
```
C. Spatial Frequency (SF)
SF uses image gradients to reflect detail and texture, much like the average gradient. The row frequency RF and column frequency CF are computed first, then combined:

$$RF = \sqrt{\frac{1}{MN} \sum_{i,j} \bigl(F(i,j) - F(i-1,j)\bigr)^2}, \quad CF = \sqrt{\frac{1}{MN} \sum_{i,j} \bigl(F(i,j) - F(i,j-1)\bigr)^2}, \quad SF = \sqrt{RF^2 + CF^2}$$
Method 1:

```python
import numpy as np

def SF_function(image):
    image_array = np.array(image)
    RF = np.diff(image_array, axis=0)  # row-direction differences
    RF1 = np.sqrt(np.mean(RF ** 2))
    CF = np.diff(image_array, axis=1)  # column-direction differences
    CF1 = np.sqrt(np.mean(CF ** 2))
    SF = np.sqrt(RF1 ** 2 + CF1 ** 2)
    return SF
```
Method 2:

```python
@classmethod
def SF(cls, img):
    cls.input_check(img)
    return np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2) +
                   np.mean((img[1:, :] - img[:-1, :]) ** 2))
```
2.3 Structure-Based Metrics
A. Structural Similarity Index Measure (SSIM)
Reference: 图像质量评估指标——SSIM介绍及计算方法-CSDN博客
SSIM is a widely used metric that scores the similarity of two images in terms of luminance, contrast, and structure. Mathematically, the SSIM between images $x$ and $y$ is defined as:

$$SSIM(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

where $\mu$, $\sigma^2$, and $\sigma_{xy}$ are local means, variances, and covariance, and $C_1$, $C_2$ are small stabilizing constants.
Method 1 (the `ssim` helper is defined separately; see the reference links at the end):

```python
# skimage's structural_similarity is a drop-in replacement for the
# separately defined ssim helper (pass data_range explicitly for float inputs)
from skimage.metrics import structural_similarity as ssim

def SSIM_function(A, B, F):
    ssim_A = ssim(A, F)
    ssim_B = ssim(B, F)
    SSIM = 1 * ssim_A + 1 * ssim_B
    return SSIM.item()
```
Method 2:

```python
from skimage.metrics import structural_similarity as ssim

@classmethod
def SSIM(cls, image_F, image_A, image_B):
    # Assumes the images are normalized to [0, 1]; update data_range otherwise
    data_range = 1.0
    cls.input_check(image_F, image_A, image_B)
    # SSIM between image_F and image_A, and between image_F and image_B
    ssim_score_A = ssim(image_F, image_A, data_range=data_range)
    ssim_score_B = ssim(image_F, image_B, data_range=data_range)
    # Return the mean of the two SSIM scores
    return (ssim_score_A + ssim_score_B) / 2
```
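Note a design difference between the two methods: Method 1 sums the two SSIM scores (so the result can reach 2), while Method 2 averages them (at most 1). Values from the two conventions are not directly comparable, so check which one a given benchmark uses.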
B. Multi-Scale Structural Similarity Index Measure (MS-SSIM)
MS-SSIM (multi-scale structural similarity index) is an extension of SSIM. The core idea is to compute SSIM at several resolution levels and combine the per-scale scores into a final value.
Reference: SSIM与MS-SSIM图像评价函数-CSDN博客
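As a sketch of the multi-scale idea, the following assumes skimage and the five standard scale weights from Wang et al.; for simplicity it applies the full SSIM at every scale rather than the reference algorithm's separate contrast/structure terms, so treat it as an approximation rather than the canonical MS-SSIM:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Standard MS-SSIM scale weights (Wang et al., 2003)
WEIGHTS = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]

def downsample2(img):
    # 2x downsampling by 2x2 average pooling (odd edges cropped)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ms_ssim_sketch(x, y, data_range=255.0):
    # Inputs should be at least ~112 px per side so the smallest scale
    # still satisfies skimage's minimum 7x7 window
    x, y = x.astype(np.float64), y.astype(np.float64)
    score = 1.0
    for i, w in enumerate(WEIGHTS):
        s = ssim(x, y, data_range=data_range)
        score *= max(s, 0.0) ** w  # clamp: negative SSIM has no real fractional power
        if i < len(WEIGHTS) - 1:
            x, y = downsample2(x), downsample2(y)
    return score
```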
C. Mean Squared Error (MSE)
MSE measures the difference between two images (smaller is better); for fusion it is averaged over the two source images, $MSE = \frac{1}{2}(MSE_{AF} + MSE_{BF})$, with each term as defined in the PSNR section.
Method 1:

```python
import numpy as np

def MSE_function(A, B, F):
    A = A / 255.0
    B = B / 255.0
    F = F / 255.0
    m, n = F.shape
    MSE_AF = np.sum((F - A) ** 2) / (m * n)
    MSE_BF = np.sum((F - B) ** 2) / (m * n)
    MSE = 0.5 * MSE_AF + 0.5 * MSE_BF
    return MSE
```
Method 2:

```python
@classmethod
def MSE(cls, image_F, image_A, image_B):  # MSE
    cls.input_check(image_F, image_A, image_B)
    return (np.mean((image_A - image_F) ** 2) + np.mean((image_B - image_F) ** 2)) / 2
```
2.4 Human-Visual-Perception-Based Metrics
A. Visual Information Fidelity for Fusion (VIFF)
VIF is more consistent with subjective human vision; larger values indicate better image quality.
Method 1:

```python
import numpy as np
from scipy.signal import convolve2d

def fspecial_gaussian(shape, sigma):
    """2D Gaussian mask; should match MATLAB's fspecial('gaussian', ...)."""
    m, n = [(ss - 1.) / 2. for ss in shape]
    y, x = np.ogrid[-m:m + 1, -n:n + 1]
    h = np.exp(-(x * x + y * y) / (2. * sigma * sigma))
    h[h < np.finfo(h.dtype).eps * h.max()] = 0
    sumh = h.sum()
    if sumh != 0:
        h /= sumh
    return h

def vifp_mscale(ref, dist):
    sigma_nsq = 2
    num = 0
    den = 0
    for scale in range(1, 5):
        N = 2 ** (4 - scale + 1) + 1
        win = fspecial_gaussian((N, N), N / 5)
        if scale > 1:
            ref = convolve2d(ref, win, mode='valid')
            dist = convolve2d(dist, win, mode='valid')
            ref = ref[::2, ::2]
            dist = dist[::2, ::2]
        mu1 = convolve2d(ref, win, mode='valid')
        mu2 = convolve2d(dist, win, mode='valid')
        mu1_sq = mu1 * mu1
        mu2_sq = mu2 * mu2
        mu1_mu2 = mu1 * mu2
        sigma1_sq = convolve2d(ref * ref, win, mode='valid') - mu1_sq
        sigma2_sq = convolve2d(dist * dist, win, mode='valid') - mu2_sq
        sigma12 = convolve2d(ref * dist, win, mode='valid') - mu1_mu2
        sigma1_sq[sigma1_sq < 0] = 0
        sigma2_sq[sigma2_sq < 0] = 0
        g = sigma12 / (sigma1_sq + 1e-10)
        sv_sq = sigma2_sq - g * sigma12
        g[sigma1_sq < 1e-10] = 0
        sv_sq[sigma1_sq < 1e-10] = sigma2_sq[sigma1_sq < 1e-10]
        sigma1_sq[sigma1_sq < 1e-10] = 0
        g[sigma2_sq < 1e-10] = 0
        sv_sq[sigma2_sq < 1e-10] = 0
        sv_sq[g < 0] = sigma2_sq[g < 0]
        g[g < 0] = 0
        sv_sq[sv_sq <= 1e-10] = 1e-10
        num += np.sum(np.log10(1 + g ** 2 * sigma1_sq / (sv_sq + sigma_nsq)))
        den += np.sum(np.log10(1 + sigma1_sq / sigma_nsq))
    vifp = num / den
    return vifp

def VIF_function(A, B, F):
    VIF = vifp_mscale(A, F) + vifp_mscale(B, F)
    return VIF
```
Method 2:

```python
@classmethod
def VIFF(cls, image_F, image_A, image_B):
    cls.input_check(image_F, image_A, image_B)
    return cls.compare_viff(image_A, image_F) + cls.compare_viff(image_B, image_F)

@classmethod
def compare_viff(cls, ref, dist):  # VIFF for one image pair
    sigma_nsq = 2  # noise variance parameter
    eps = 1e-10    # small constant to avoid division by zero
    num = 0.0      # numerator accumulator
    den = 0.0      # denominator accumulator
    # Multi-scale analysis (4 scales)
    for scale in range(1, 5):
        # Build the Gaussian kernel
        N = 2 ** (4 - scale + 1) + 1  # kernel size
        sd = N / 5.0                  # standard deviation
        m, n = [(ss - 1.) / 2. for ss in (N, N)]
        y, x = np.ogrid[-m:m + 1, -n:n + 1]
        h = np.exp(-(x * x + y * y) / (2. * sd * sd))
        h[h < np.finfo(h.dtype).eps * h.max()] = 0
        win = h / h.sum()  # normalize
        # Downsample (multi-scale processing)
        if scale > 1:
            ref = convolve2d(ref, np.rot90(win, 2), mode='valid')
            dist = convolve2d(dist, np.rot90(win, 2), mode='valid')
            ref = ref[::2, ::2]  # keep every other row and column
            dist = dist[::2, ::2]
        # Local statistics
        mu1 = convolve2d(ref, np.rot90(win, 2), mode='valid')   # reference mean
        mu2 = convolve2d(dist, np.rot90(win, 2), mode='valid')  # distorted mean
        mu1_sq = mu1 * mu1
        mu2_sq = mu2 * mu2
        mu1_mu2 = mu1 * mu2
        # Local variances and covariance
        sigma1_sq = convolve2d(ref * ref, np.rot90(win, 2), mode='valid') - mu1_sq
        sigma2_sq = convolve2d(dist * dist, np.rot90(win, 2), mode='valid') - mu2_sq
        sigma12 = convolve2d(ref * dist, np.rot90(win, 2), mode='valid') - mu1_mu2
        # Clamp variances to be non-negative
        sigma1_sq[sigma1_sq < 0] = 0
        sigma2_sq[sigma2_sq < 0] = 0
        # Gain factor and residual variance
        g = sigma12 / (sigma1_sq + eps)  # signal gain
        sv_sq = sigma2_sq - g * sigma12  # residual variance
        # Handle special cases (numerical stability)
        g[sigma1_sq < eps] = 0
        sv_sq[sigma1_sq < eps] = sigma2_sq[sigma1_sq < eps]
        sigma1_sq[sigma1_sq < eps] = 0
        g[sigma2_sq < eps] = 0
        sv_sq[sigma2_sq < eps] = 0
        sv_sq[g < 0] = sigma2_sq[g < 0]
        g[g < 0] = 0
        sv_sq[sv_sq <= eps] = eps
        # Accumulate numerator and denominator
        num += np.sum(np.log10(1 + g * g * sigma1_sq / (sv_sq + sigma_nsq)))
        den += np.sum(np.log10(1 + sigma1_sq / sigma_nsq))
    # Final VIFF value
    vifp = num / den
    if np.isnan(vifp):
        return 1.0  # guard against NaN
    else:
        return vifp
```
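One note on the `np.rot90(win, 2)` calls above: rotating the kernel by 180° turns `convolve2d` into correlation, likely mirroring the filter2 convention of the original MATLAB implementation. Because this Gaussian kernel is symmetric, the rotation does not change the result, so Methods 1 and 2 differ essentially only in Method 2's NaN guard.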
2.5 Correlation-Based Metrics
A. Correlation Coefficient (CC)
CC reflects the degree of linear correlation between the source images and the fused image. The correlation coefficient measures the strength of the linear relationship between two variables and takes values in [-1, 1]: 1 means perfect positive correlation, -1 perfect negative correlation, and 0 no correlation.
Method 1:

```python
import numpy as np

def CC_function(A, B, F):
    rAF = np.sum((A - np.mean(A)) * (F - np.mean(F))) / np.sqrt(
        np.sum((A - np.mean(A)) ** 2) * np.sum((F - np.mean(F)) ** 2))
    rBF = np.sum((B - np.mean(B)) * (F - np.mean(F))) / np.sqrt(
        np.sum((B - np.mean(B)) ** 2) * np.sum((F - np.mean(F)) ** 2))
    CC = np.mean([rAF, rBF])
    return CC
```
Method 2:

```python
@classmethod
def CC(cls, image_F, image_A, image_B):
    cls.input_check(image_F, image_A, image_B)
    rAF = np.sum((image_A - np.mean(image_A)) * (image_F - np.mean(image_F))) / np.sqrt(
        np.sum((image_A - np.mean(image_A)) ** 2) * np.sum((image_F - np.mean(image_F)) ** 2))
    rBF = np.sum((image_B - np.mean(image_B)) * (image_F - np.mean(image_F))) / np.sqrt(
        np.sum((image_B - np.mean(image_B)) ** 2) * np.sum((image_F - np.mean(image_F)) ** 2))
    return (rAF + rBF) / 2
```
B. Sum of Correlation Differences (SCD)
SCD characterizes the quality of a fusion algorithm by measuring the differences between the fused image and the source images. With $D_{X,F}$ denoting the difference image between the fused image $F$ and source image $X$, and $r(\cdot, \cdot)$ the correlation coefficient:

$$SCD = r(F - B,\ A) + r(F - A,\ B)$$

A higher SCD means the fused image carries more of the information contained in the source images.
Method 1:

```python
import numpy as np

def corr2(a, b):
    a = a - np.mean(a)
    b = b - np.mean(b)
    r = np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
    return r

def SCD_function(A, B, F):
    r = corr2(F - B, A) + corr2(F - A, B)
    return r
```
Method 2:

```python
@classmethod
def corr2(cls, a, b):
    a = a - np.mean(a)
    b = b - np.mean(b)
    r = np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
    return r

@classmethod
def SCD(cls, image_F, image_A, image_B):  # the sum of the correlations of differences
    cls.input_check(image_F, image_A, image_B)
    r = cls.corr2(image_F - image_B, image_A) + cls.corr2(image_F - image_A, image_B)
    return r
```
References:
Method 1: 图像融合评估指标Python版_图像融合评价指标python-CSDN博客
Method 2: 红外与可见光图像融合评价指标(cddfusion中的代码Evaluator.py)-CSDN博客