How to turn off scientific notation in PySpark?

Asked: 2016-10-23 18:47:56

Tags: apache-spark pyspark apache-spark-sql spark-dataframe

As the result of some aggregations, I ended up with the following Spark DataFrame:

+------------+-----------------+-----------------+
|sale_user_id|gross_profit     |total_sale_volume|
+------------+-----------------+-----------------+
|       20569|       -3322960.0|     2.12569482E8|
|       24269|       -1876253.0|      8.6424626E7|
|        9583|              0.0|       1.282272E7|
|       11722|          18229.0|        5653149.0|
|       37982|           6077.0|        1181243.0|
|       20428|           1665.0|        7011588.0|
|       41157|          73227.0|        1.18631E7|
|        9993|              0.0|        1481437.0|
|        9030|           8865.0|      4.4133791E7|
|         829|              0.0|          11355.0|
+------------+-----------------+-----------------+

and the schema of the DataFrame is:

root
 |-- sale_user_id: string (nullable = true)
 |-- tapp_gross_profit: double (nullable = true)
 |-- total_sale_volume: double (nullable = true)

How can I disable scientific notation in the gross_profit and total_sale_volume columns?

1 answer:

Answer 0 (score: 8)

The easiest way is to cast the double columns to decimal, giving an appropriate precision and scale.
