Calculate a 3D gradient with unevenly spaced points

Asked: 2016-10-31 07:58:59

Tags: python numpy scipy numerical-methods derivative

I currently have a volume spanned by a few million unevenly spaced particles, each of which has an attribute (the potential, for those who are curious) from which I would like to calculate the local force (acceleration).

np.gradient only deals with evenly spaced data. I looked here: Second order gradient in numpy, where interpolation is necessary, but I could not find a 3D spline implementation in NumPy.

Some code that will produce representative data:

import numpy as np    
from scipy.spatial import cKDTree

x = np.random.uniform(-10, 10, 10000)
y = np.random.uniform(-10, 10, 10000)
z = np.random.uniform(-10, 10, 10000)
phi = np.random.uniform(-10**9, 0, 10000)

kdtree = cKDTree(np.c_[x,y,z])
_, index = kdtree.query([0,0,0], 32) #find 32 nearest particles to the origin
#find the gradient at (0,0,0) by considering the 32 nearest particles?  
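
A minimal sketch of the kind of thing I have in mind, assuming the potential is roughly linear across those 32 neighbors (this continues the code above and is just a guess at an approach, not something I know to be accurate):

d = np.c_[x[index], y[index], z[index]]   # neighbor offsets from the origin query point
A = np.c_[np.ones(len(index)), d]         # design matrix: [1, dx, dy, dz]
coef, *_ = np.linalg.lstsq(A, phi[index], rcond=None)
grad = coef[1:]                           # least-squares gradient estimate at (0, 0, 0)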

(My question is very similar to Function to compute 3D gradient with unevenly spaced sample locations, but there seemed to be no solution there, so I thought I would ask again.)

Any help would be appreciated.

4 answers:

Answer 0 (score: 2)

Here is a Julia implementation that does what you are asking for:

using NearestNeighbors

n = 3;
k = 32; # for stability use  k > n*(n+3)/2

# Take a point near the center of cube
point = 0.5 + rand(n)*1e-3;
data = rand(n, 10^4);
kdtree = KDTree(data);
idxs, dists = knn(kdtree, point, k, true);

# Coords of the k-Nearest Neighbors
X = data[:,idxs];

# Least-squares recipe for coefficients
 C = point * ones(1,k); # central node
dX = X - C;  # diffs from central node
 G = dX' * dX;
 F =  G .* G;
 v = diag(G);
 N = pinv(G) * G;
 N = eye(N) - N;
 a =  N * pinv(F*N) * v;  # ...these are the coeffs

# Use a temperature distribution of  T = 25.4 * r^2
# whose analytical gradient is   gradT = 25.4 * 2*x
X2 = X .* X;
C2 = C .* C;
T  = 25.4 * n * mean(X2, 1)';
Tc = 25.4 * n * mean(C2, 1)'; # central node
dT = T - Tc;       # diffs from central node

y = dX * (a .* dT);   # Estimated gradient
g = 2 * 25.4 * point; # Analytical

# print results
@printf "Estimated  Grad  = %s\n" string(y')
@printf "Analytical Grad  = %s\n" string(g')
@printf "Relative Error   = %.8f\n" vecnorm(g-y)/vecnorm(g)


The method has a relative error of about 1%. Here are the results from a few runs...

Estimated  Grad  = [25.51670916224472 25.421038632006926 25.6711949674633]
Analytical Grad  = [25.41499027802736 25.44913042322385  25.448202594123806]
Relative Error   = 0.00559934

Estimated  Grad  = [25.310574056859014 25.549736360607493 25.368056350800604]
Analytical Grad  = [25.43200914200516  25.43243178887198  25.45061497749628]
Relative Error   = 0.00426558


Update
I don't know Python very well, but here is a translation that seems to work:

import numpy as np
from scipy.spatial import KDTree

n = 3;
k = 32;

# fill the cube with random points
data = np.random.rand(10000,n)
kdtree = KDTree(data)

# pick a point (at the center of the cube)
point = 0.5 * np.ones((1,n))

# Coords of k-Nearest Neighbors
dists, idxs = kdtree.query(point, k)
idxs = idxs[0]
X = data[idxs,:]

# Calculate coefficients
C = (np.dot(point.T, np.ones((1,k)))).T # central node
dX = X - C                   # diffs from central node
G = np.dot(dX, dX.T)
F = np.multiply(G, G)
v = np.diag(G)
N = np.dot(np.linalg.pinv(G), G)
N = np.eye(k) - N
a = np.dot(np.dot(N, np.linalg.pinv(np.dot(F,N))), v)  # these are the coeffs

#  Temperature distribution is  T = 25.4 * r^2
X2 = np.multiply(X, X)
C2 = np.multiply(C, C)
T  = 25.4 * n * np.mean(X2, 1).T
Tc = 25.4 * n * np.mean(C2, 1).T # central node
dT = T - Tc        # diffs from central node

# Analytical gradient ==>  gradT = 2*25.4* x
g = 2 * 25.4 * point
print( "g[]: %s" % (g) )

# Estimated gradient
y = np.dot(dX.T, np.multiply(a, dT))
print( "y[]: %s,   Relative Error = %.8f" % (y, np.linalg.norm(g-y)/np.linalg.norm(g)) )


Update #2
I think I can write something understandable using formatted ASCII instead of LaTeX...

`Given a set of M vectors in n-dimensions (call them b_k), find a set of
`coeffs (call them a_k) which yields the best estimate of the identity
`matrix and the zero vector
`
`                                 M
` (1) min ||E - I||,  where  E = sum  a_k b_k b_k
`     a_k                        k=1
`
`                                 M
` (2) min ||z - 0||,  where  z = sum  a_k b_k
`     a_k                        k=1
`
`
`Note that the basis vectors {b_k} are not required
`to be normalized, orthogonal, or even linearly independent.
`
`First, define the following quantities:
`
`  B             ==> matrix whose columns are the b_k
`  G = B'.B      ==> transpose of B times B
`  F = G @ G     ==> @ represents the hadamard product
`  v = diag(G)   ==> vector composed of diag elements of G
`
`The above minimizations are equivalent to this linearly constrained problem
`
`  Solve  F.a = v
`  s.t.   G.a = 0
`
`Let {X} denote the Moore-Penrose inverse of X.
`Then the solution of the linear problem can be written:
`
`  N = I - {G}.G       ==> projector into nullspace of G
`  a = N . {F.N} . v
`
`The utility of these coeffs is that they allow you to write
`very simple expressions for the derivatives of a tensor field.
`
`
`Let D be the del (or nabla) operator
`and d be the difference operator wrt the central (aka 0th) node,
`so that, for any scalar/vector/tensor quantity Y, we have:
`  dY = Y - Y_0
`
`Let x_k be the position vector of the kth node.
`And for our basis vectors, take
`  b_k = dx_k  =  x_k - x_0.
`
`Assume that each node has a field value associated with it
` (e.g. temperature), and assume a quadratic model [about x = x_0]
` for the field [g=gradient, H=hessian, ":" is the double-dot product]
`
`     Y = Y_0 + (x-x_0).g + (x-x_0)(x-x_0):H/2
`    dY = dx.g + dxdx:H/2
`   D2Y = I:H            ==> Laplacian of Y
`
`
`Evaluate the model at the kth node 
`
`    dY_k = dx_k.g  +  dx_k dx_k:H/2
`
`Multiply by a_k and sum
`
`     M               M                  M
`    sum a_k dY_k =  sum a_k dx_k.g  +  sum a_k dx_k dx_k:H/2
`    k=1             k=1                k=1
`
`                 =  0.g   +  I:H/2
`                 =  D2Y / 2
`
`Thus, we have a second order estimate of the Laplacian
`
`                M
`   Lap(Y_0) =  sum  2 a_k dY_k
`               k=1
`
`
`Now play the same game with a linear model
`    dY_k = dx_k.g
`
`But this time multiply by (a_k dx_k) and sum
`
`     M                    M
`    sum a_k dx_k dY_k =  sum a_k dx_k dx_k.g
`    k=1                  k=1
`
`                      =  I.g
`                      =  g
`
`
`In general, the derivatives at the central node can be estimated as
`
`           M
`    D#Y = sum  a_k dx_k#dY_k
`          k=1
`
`           M
`    D2Y = sum  2 a_k dY_k
`          k=1
`
` where
`   # stands for the {dot, cross, or tensor} product
`       yielding the {div, curl,  or grad} of Y
` and
`   D2Y stands for the Laplacian of Y
`   D2Y = D.DY = Lap(Y)
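
As a quick sanity check of that Laplacian formula (my own continuation of the Python translation above, not part of the original answer; it reuses a, dT, and n from that code):

# Lap(Y_0) ~= sum_k 2 a_k dY_k, using the same coefficients a
lap_est  = 2.0 * np.dot(a, dT)
lap_true = 2.0 * 25.4 * n   # analytical Laplacian of T = 25.4 * r^2 in n dimensions
print("Estimated  Laplacian = %.6f" % lap_est)
print("Analytical Laplacian = %.6f" % lap_true)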

Answer 1 (score: 1)

Intuitively, for the derivative at a single data point, I would do the following:

  • Slice out the surrounding data: data = phi[x_id-1:x_id+2, y_id-1:y_id+2, z_id-1:z_id+2] (the upper bounds are +2 because Python slices exclude the endpoint). The approach with the kdtree looks very nice; of course, you can use that to select a subset of the data, too.
  • Fit a 3D polynomial; you might want to look at polyvander3d. Define the point in the middle of the slice as the center, calculate the offsets of the other points, and pass those offsets as coordinates to the fit.
  • Differentiate the polynomial at the position you are interested in.

That would be a straightforward solution to your problem. However, it would probably also be very slow.

Edit:

In fact, this seems to be the usual approach: https://scicomp.stackexchange.com/questions/480/how-can-i-numerically-differentiate-an-unevenly-sampled-function

The accepted answer there talks about differentiating an interpolating polynomial, although apparently the polynomial is expected to cover all the data (Vandermonde matrix). For you that is impossible; there is far too much data. Taking a local subset seems very reasonable.
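
A rough sketch of that recipe (my own illustration, not lhk's code: it assumes a smooth test potential phi = x^2 + y^2 + z^2 so the estimate can be checked, and fits a full quadratic with polyvander3d from numpy.polynomial.polynomial):

import numpy as np
from numpy.polynomial import polynomial as P
from scipy.spatial import cKDTree

pts = np.random.uniform(-10, 10, (10000, 3))
phi = np.sum(pts**2, axis=1)            # smooth test potential, gradient = 2*x

kdtree = cKDTree(pts)
center = np.array([1.0, 2.0, 3.0])
_, idx = kdtree.query(center, 32)
d = pts[idx] - center                   # offsets from the expansion point

# quadratic Vandermonde basis in the offsets; column (i,j,k) holds dx^i dy^j dz^k
V = P.polyvander3d(d[:, 0], d[:, 1], d[:, 2], [2, 2, 2])
coef, *_ = np.linalg.lstsq(V, phi[idx], rcond=None)

# at zero offset only the linear terms survive differentiation;
# for deg [2, 2, 2] the flat index of dx^i dy^j dz^k is 9*i + 3*j + k
grad = np.array([coef[9], coef[3], coef[1]])
print(grad)                             # should be close to 2*center = [2, 4, 6]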

Answer 2 (score: 1)

A lot depends on the signal-to-noise ratio of your potential data. Your example is all noise, so "fitting" anything to it will always be "overfitting." The degree of noise will determine how much you want to be poly-fitting (as in lhk's answer) and how much you want to be Kriging (using pyKriging or otherwise).

  1. I would suggest using query(x, distance_upper_bound) instead of query(x, k), as this may prevent some instabilities due to clustering (a minimal sketch of this follows the list).

  2. I am not a mathematician, but I would expect that fitting polynomials to a distance-dependent subset of data would be spatially unstable, especially as the polynomial order increases. It would make the resulting gradient field discontinuous.
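
A minimal sketch of the radius-based lookup from point 1 (my illustration; the radius 2.0 is arbitrary, and kdtree is the tree built in the question):

# All neighbors within a fixed radius, with no fixed k:
neighbors = kdtree.query_ball_point([0, 0, 0], r=2.0)

# Or keep query() but cap the distance; unfilled slots come back with inf distance
dists, idx = kdtree.query([0, 0, 0], k=32, distance_upper_bound=2.0)
idx = idx[np.isfinite(dists)]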

Answer 3 (score: 1)

My two cents, though I am late to the party: when the space is evenly spanned and large, one usually extracts only local information for each particle.

As you may notice, there are different ways to extract local information:

  1. The N nearest neighbors, e.g. using a KD-tree. This defines the locality dynamically, which may or may not be a good idea.
  2. Splitting the space randomly with planes to group the particles: essentially testing N inequalities to cut the space N times (a toy sketch follows the list).
  3. Once the locality is defined, you can fit a polynomial, which can be differentiated analytically. I encourage more thought about the different definitions of locality here (they may produce interesting differences).
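
And a toy sketch of the plane-splitting idea from point 2 (my illustration; the eight random hyperplanes are an arbitrary choice, and pts stacks the question's x, y, z):

pts = np.c_[x, y, z]                     # particle positions from the question
normals = np.random.normal(size=(8, 3))  # 8 random plane normals
offsets = np.random.uniform(-10, 10, 8)  # 8 random plane offsets

bits = pts @ normals.T > offsets         # N x 8 half-space tests (the inequalities)
cells = np.packbits(bits, axis=1)[:, 0]  # equal labels = same cell of the partition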