Dynamically injecting meta tags into HTML with Express

Date: 2017-01-18 16:45:18

Tags: node.js express pug template-engine cheerio

TL;DR:

I am currently migrating a website from an Apache + PHP stack to Node + Express, and I would like to know the best method / best practice (if there is one) for dynamically injecting meta tags under the new stack.

Details:

Under the existing stack, meta tags are injected dynamically by adding PHP code directly to the HTML files. Since rendering is done server-side, Facebook / Google+ / any web crawler interprets the tags correctly.

Under the new stack, after some research, I found two options:

  1. Use a template engine like Pug (Jade) to render the HTML with locals. (Rewriting the existing HTML in Pug's syntax seems like overkill? Can Pug work with plain HTML, or should I consider another template engine such as EJS? Which template engines would you suggest I explore?)
  2. Use a DOM manipulation library like Cheerio to inject the meta tags first, before rendering begins.

Between these two options, which would perform better, or is there no substantial difference? Are there any other approaches you would recommend? Thanks!

1 Answer:

Answer 0: (score: 4)

EJS would probably be the easiest, as it is very similar to PHP.

You can also look at Mustache and Handlebars for other options that require only minimal changes to your existing HTML. For example:

  • with EJS: <html><head><%= yourMetaTags %> ...
  • with Mustache: <html><head>{{ yourMetaTags }} ...
  • with Handlebars: <html><head>{{ yourMetaTags }} ...
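For instance, a minimal sketch of the EJS route (the route, file names, and tag values here are hypothetical, not from the question):

```js
// app.js -- a minimal sketch of option 1 with EJS.
// npm install express ejs
const express = require('express');
const app = express();

// Tell Express to render .ejs templates from the default ./views folder.
app.set('view engine', 'ejs');

app.get('/article/:slug', (req, res) => {
  // In a real app these locals would come from a database lookup.
  res.render('article', {
    title: 'My Article',
    description: 'A short summary for crawlers to pick up.'
  });
});

app.listen(3000);
```

The template (views/article.ejs) keeps your existing HTML, with EJS tags only where dynamic values go:

```html
<html>
  <head>
    <title><%= title %></title>
    <meta name="description" content="<%= description %>">
    <meta property="og:title" content="<%= title %>">
  </head>
  <!-- the rest of the existing HTML stays unchanged -->
</html>
```

Since rendering happens server-side, crawlers see the final tags, just as with the PHP setup.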

doT.js is also very fast.

Parsing the HTML and manipulating it with the DOM API just to insert meta tags is overkill in my opinion.
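For comparison, option 2 with Cheerio would look roughly like this (a sketch with assumed file names and tag values; note that it re-parses the whole document on every request):

```js
// A rough sketch of option 2 with Cheerio -- shown only for comparison,
// since parsing the full document per request is the overkill part.
const express = require('express');
const fs = require('fs');
const cheerio = require('cheerio');

const app = express();

app.get('/', (req, res) => {
  // Load the existing static HTML and parse it into a DOM.
  const html = fs.readFileSync('index.html', 'utf8');
  const $ = cheerio.load(html);

  // Append the dynamic meta tags into <head>, then serialize back to HTML.
  $('head').append('<meta property="og:title" content="My Page">');
  res.send($.html());
});

app.listen(3000);
```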

On the other hand, if all you need is to insert meta tags, you could get away with a simple regex replacement on something like <html><head>{{ yourMetaTags }} ..., but it is likely to grow more complex over time as you need more features. After all, everyone builds a template engine at some point in their life.
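To illustrate that placeholder idea, here is a minimal sketch assuming the HTML file contains a {{ yourMetaTags }} placeholder (file name and tag values are illustrative):

```js
// A minimal sketch of the plain string-replacement approach.
const express = require('express');
const fs = require('fs');

const app = express();

// Read the HTML once at startup; it contains a {{ yourMetaTags }} placeholder.
const template = fs.readFileSync('index.html', 'utf8');

app.get('/', (req, res) => {
  const metaTags = '<meta property="og:title" content="My Page">';
  // String.prototype.replace swaps the first occurrence of the placeholder.
  res.send(template.replace('{{ yourMetaTags }}', metaTags));
});

app.listen(3000);
```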