Cannot connect PHP to SQL Server 2012

Date: 2016-07-06 07:53:18

Tags: php, sql-server

I am using XAMPP and I am trying to connect my PHP project to SQL Server in order to generate reports, but I am running into a problem with the sqlsrv_connect() function.

The PHP version installed in my XAMPP is 5.6.23.

Also, I have downloaded the following files and extracted them into "C:\xampp\php". (Screenshot: PHP DLLs)

I have added the corresponding entries to the "php.ini" file, and my connection file contains this code:

<?php
$serverName = "MyserverName\MyinstanceName"; //serverName\instanceName

$connectionInfo = array("Database"=>"Database_Example");
$conn = sqlsrv_connect($serverName, $connectionInfo);

if( $conn ) {
     echo "Conexión establecida.<br />";
}else{
     echo "La conexión no se pudo establecer.<br />";
     die( print_r( sqlsrv_errors(), true));
}
?>
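A side note on this snippet (my observation, not part of the original post): because $connectionInfo only contains "Database", sqlsrv_connect() attempts Windows Authentication. To log in with a SQL Server account instead, the options array would look roughly like this (login and password are placeholders):

$connectionInfo = array(
    "Database" => "Database_Example",
    "UID"      => "my_sql_login",  // placeholder SQL Server login
    "PWD"      => "my_password"    // placeholder password
);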

I have already tried the exact driver version for my PHP, but it does not work.

(Screenshot: Drivers Version)

This is the error that appears when I try to connect:

Call to undefined function sqlsrv_connect() in C:\xampp\htdocs\Inventory\conexion.proc.php
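This "undefined function" error normally means the sqlsrv extension was never loaded. A minimal check (not from the original post) is to ask PHP directly:

<?php
// Quick sanity check: reports whether the sqlsrv extension is actually loaded.
var_dump(extension_loaded('sqlsrv'));
// phpinfo() also lists every loaded extension and shows which php.ini is in use.
phpinfo();
?>

If extension_loaded('sqlsrv') returns false, PHP is not picking up the driver DLLs, which is exactly what the accepted answer below addresses.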

1 Answer:

Answer 0 (score: 2):

I have solved the problem myself! I am writing the answer here for anyone who runs into the same issue.

The version of the drivers must be the same as your PHP version, not a higher one. You must also extract the DLL files into "C:\xampp\php\ext", not into "C:\xampp\php".
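For reference, once the DLLs are in the ext directory, php.ini also needs matching extension entries. A minimal sketch, assuming the 32-bit thread-safe driver build for PHP 5.6 (adjust the file names to whatever DLLs you actually downloaded):

; in C:\xampp\php\php.ini (XAMPP's extension_dir already points at C:\xampp\php\ext)
extension=php_sqlsrv_56_ts.dll
extension=php_pdo_sqlsrv_56_ts.dll

Restart Apache from the XAMPP control panel after editing php.ini so the extensions are picked up.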

After that, another problem appears: ODBC Driver 11 is missing.

You must download ODBC Driver 11 from this link: https://www.microsoft.com/en-us/download/confirmation.aspx?id=36434

Then, once the driver is installed, the code with the query will work!
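The answer does not show the query code itself; below is a minimal sketch of how a query typically looks with this extension once the connection works (the server, database, and Products table names are placeholders, not from the post):

<?php
// Connect exactly as in the question; all names here are placeholders.
$serverName     = "MyserverName\MyinstanceName";
$connectionInfo = array("Database" => "Database_Example");
$conn = sqlsrv_connect($serverName, $connectionInfo);

if ($conn === false) {
    die(print_r(sqlsrv_errors(), true));
}

// Run a simple query; "Products" is a made-up table for illustration.
$stmt = sqlsrv_query($conn, "SELECT TOP 10 * FROM Products");
if ($stmt === false) {
    die(print_r(sqlsrv_errors(), true));
}

// Fetch each row as an associative array and print it.
while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
    print_r($row);
    echo "<br />";
}

sqlsrv_free_stmt($stmt);
sqlsrv_close($conn);
?>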