Alternative way to compute the mean of a 4D matrix

Asked: 2013-12-16 09:04:34

Tags: matlab

I need some advice on an AAM (Active Appearance Model) implementation that I am trying to understand. The run cannot finish because an error occurs saying that Matlab is out of memory:

  

Error using zeros
Out of memory. Type HELP MEMORY for your options.

The code that causes the error is:

Error in AAM_MakeSearchModel2D (line 6)
drdp=zeros(size(ShapeAppearanceData.Evectors,2)+4,6,length(TrainingData),length(AppearanceData.g_mean));

With the actual sizes substituted, the drdp allocation is:

drdp=zeros(13,6,10,468249);

Because the 4th dimension is so large, I can understand why my 32-bit Matlab runs out of memory: 13*6*10*468249 is about 365 million doubles, i.e. roughly 2.9 GB, more than a 32-bit process can allocate. The output that the code finally produces is only 2D. Here is the code that later uses drdp:

drdpt=squeeze(mean(mean(drdp,3),2));
R=pinv(drdpt)';

The question I would like to ask is whether the 4D matrix can be split into smaller matrices (e.g. 2D or 3D) and the mean obtained with plain additions and divisions. If so, how should this be done?
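To make the question concrete, here is a small toy sketch (with made-up sizes) of the kind of splitting I have in mind; the comparison at the end suggests the incremental result should equal the direct one:

% Toy check (made-up sizes): the mean over dimensions 2 and 3 of a 4D
% array equals a running sum of its 2D slices divided by the slice count,
% so the full 4D array never has to exist in memory at once.
nJ = 3; nK = 6; nI = 4; nPix = 100;
A = randn(nJ, nK, nI, nPix);              % stand-in for drdp

acc = zeros(nJ, nPix);                    % 2D accumulator
for i = 1:nI
    for k = 1:nK
        acc = acc + squeeze(A(:,k,i,:));  % add one nJ-by-nPix slice
    end
end
m_incremental = acc / (nK*nI);

m_direct = squeeze(mean(mean(A,3),2));    % the reduction used in the AAM code
max(abs(m_incremental(:) - m_direct(:)))  % ~1e-15, i.e. the same result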

Edit, 17 December 2013:

I cannot use sparse, because the 4D drdp is an initialization that a later computation fills completely: all weighted errors of the model versus the real intensities are stored into drdp. I have copied the part of the AAM function that computes this drdp:

function R=AAM_MakeSearchModel2D(ShapeAppearanceData,ShapeData,AppearanceData,TrainingData,options)


% Structure which will contain all weighted errors of model versus real
% intensities, by several offsets of the parameters
drdp=zeros(size(ShapeAppearanceData.Evectors,2)+4,6,length(TrainingData),length(AppearanceData.g_mean));
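% (Dimensions: number of PCA + pose parameters, number of parameter
%  offsets, number of training images, number of appearance pixels)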

% We use the training-data images to train the model, because we want
% the background information to be included

% Loop through all training images
for i=1:length(TrainingData)
    % Loop through all model parameters, both the PCA parameters and the
    % pose parameters
    for j = 1:size(ShapeAppearanceData.Evectors,2)+4
        if(j<=size(ShapeAppearanceData.Evectors,2))
            % Model parameters, offsets
            de = [-0.5 -0.3 -0.1 0.1 0.3 0.5];

            % First we calculate the real ShapeAppearance parameters of the
            % training data set
            c = ShapeAppearanceData.Evectors'*(ShapeAppearanceData.b(:,i) -ShapeAppearanceData.b_mean);

            % Standard deviation from the eigenvalue
            c_std = sqrt(ShapeAppearanceData.Evalues(j));
            for k=1:length(de)
                % Offset the ShapeAppearance parameters with a certain
                % value times the std of the eigenvector
                c_offset=c;
                c_offset(j)=c_offset(j)+c_std *de(k);

                % Transform back from  ShapeAppearance parameters to Shape parameters  
                b_offset = ShapeAppearanceData.b_mean + ShapeAppearanceData.Evectors*c_offset;
                b1_offset = b_offset(1:(length(ShapeAppearanceData.Ws)));
                b1_offset= inv(ShapeAppearanceData.Ws)*b1_offset;
                x = ShapeData.x_mean + ShapeData.Evectors*b1_offset;
                pos(:,1)=x(1:end/2); 
                pos(:,2)=x(end/2+1:end);



                % Transform the Shape back to real image coordinates
                pos=AAM_align_data_inverse2D(pos,TrainingData(i).tform);

                % Get the intensities in the real image. Use those
                % intensities to get ShapeAppearance parameters, which
                % are then used to get model intensities
                [g, g_offset]=RealAndModel(TrainingData,i,pos, AppearanceData,ShapeAppearanceData,options,ShapeData);

                % A weighted sum of differences between model and real
                % intensities gives the "intensity / offset" ratio
                w = exp ((-de(k)^2) / (2*c_std^2))/de(k);
                drdp(j,k,i,:)=(g-g_offset)*w;
            end
        else
            % Pose parameters offsets
            j2=j-size(ShapeAppearanceData.Evectors,2);
            switch(j2)
                case 1 % Translation x
                    de = [-2 -1.2 -0.4 0.4 1.2 2]/2;
                case 2 % Translation y
                    de = [-2 -1.2 -0.4 0.4 1.2 2]/2;
                case 3 % Scaling & Rotation Sx
                    de = [-0.2 -.12 -0.04 0.04 0.12 0.2]/2;
                case 4 % Scaling & Rotation Sy
                    de = [-0.2 -.12 -0.04 0.04 0.12 0.2]/2;
            end

            for k=1:length(de)
                tform=TrainingData(i).tform;
                switch(j2)
                    case 1 % Translation x
                        tform.offsetv(1)=tform.offsetv(1)+de(k);
                    case 2 % Translation y
                        tform.offsetv(2)=tform.offsetv(2)+de(k);
                    case 3 % Scaling & Rotation Sx
                        tform.offsetsx=tform.offsetsx+de(k);
                    case 4 % Scaling & Rotation Sy
                        tform.offsetsy=tform.offsetsy+de(k);
                end

                % From Shape to real image coordinates, with a certain
                % pose offset
                pos=AAM_align_data_inverse2D(TrainingData(i).CVertices,  tform);

                % Get the intensities in the real image. Use those
                % intensities to get ShapeAppearance parameters, which
                % are then used to get model intensities
                [g, g_offset]=RealAndModel(TrainingData,i,pos, AppearanceData,ShapeAppearanceData,options,ShapeData);

                % A weighted sum of differences between model and real
                % intensities gives the "intensity / offset" ratio
                w =exp ((-de(k)^2) / (2*2^2))/de(k);
                drdp(j,k,i,:)=(g-g_offset)*w;
            end
        end
    end
end

% Combine the data to the intensity/parameter matrix, 
% using a pseudo inverse
% for i=1:length(TrainingData);
%     drdpt=squeeze(mean(drdp(:,:,i,:),2));
%     R(:,:,i) = (drdpt * drdpt')\drdpt;
% end
% % Combine the data intensity/parameter matrix of all training datasets.
% %
% % In case of only a few images, it will be better to use a weighted mean
% % instead of the normal mean, depending on the probability of the trainingset
% R=mean(R,3);    

drdpt=squeeze(mean(mean(drdp,3),2));
R=pinv(drdpt)';
%R = (drdpt * drdpt')\drdpt;

As you can see in the final part of the function, the 4D drdp is squeezed and reduced again, and the result is stored as the 2D matrix R. Because of the "Out of Memory" problem, the function cannot even initialize drdp, since it takes up so much space (drdp = zeros(13,6,10,468249)). Can I store the data in 2D or 3D form (splitting up the drdp part), then do simple additions and divisions to obtain the mean, and finally end up with 'R'?
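In case it makes the question clearer, this is roughly the replacement I am considering (untested, and assuming every (g-g_offset)*w vector has length(AppearanceData.g_mean) elements, as the original drdp assignment implies):

% Untested sketch: replace the 4D drdp by a 2D running sum, since the final
% mean over dimensions 2 (offsets) and 3 (images) only needs sums.
nParams  = size(ShapeAppearanceData.Evectors,2)+4;
nOffsets = 6;
nImages  = length(TrainingData);
nPix     = length(AppearanceData.g_mean);

drdp_sum = zeros(nParams, nPix);          % about 13 x 468249 doubles, ~50 MB

% ... inside the existing i/j/k loops, instead of
%         drdp(j,k,i,:) = (g-g_offset)*w;
% accumulate:
%         drdp_sum(j,:) = drdp_sum(j,:) + reshape((g-g_offset)*w, 1, []);

% ... and after the loops, instead of squeeze(mean(mean(drdp,3),2)):
drdpt = drdp_sum / (nOffsets*nImages);
R = pinv(drdpt)';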

Thank you, and sorry for the long question.

1 answer:

Answer 0 (score: 0):

If many elements of drdp stay zero, I think you want to use some sparse representation. Matlab's sparse command can only create 2D matrices, so maybe something like this would work?

http://www.mathworks.com/matlabcentral/newsreader/view_thread/167669

Once you have that working, you can worry about computing the means -
apart from needing some bookkeeping, that should be doable.
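For what it is worth, here is a rough, untested sketch of what the linked thread suggests: fold the (j,k,i) indices into the rows of a single 2D sparse matrix (only worthwhile if most stored values really are zero), then take the mean over the rows belonging to each parameter j:

nJ = 13; nK = 6; nI = 10; nPix = 468249;
S = sparse(nJ*nK*nI, nPix);               % 2D sparse stand-in for the 4D drdp

% Inside the loops, instead of drdp(j,k,i,:) = v  (with v = (g-g_offset)*w):
%     row = sub2ind([nJ nK nI], j, k, i);
%     S(row,:) = sparse(reshape(v, 1, []));

% Afterwards: with this row ordering, the rows for a fixed parameter j are
% j, j+nJ, j+2*nJ, ..., so the mean over offsets and images is
drdpt = zeros(nJ, nPix);
for j = 1:nJ
    drdpt(j,:) = full(mean(S(j:nJ:end,:), 1));
end
R = pinv(drdpt)';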
