[info] - 2.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (62 milliseconds)
[info] - 2.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (179 milliseconds)
[info] - 2.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (169 milliseconds)
[info] - 2.1: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (244 milliseconds)
06:09:19.414 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
06:09:24.262 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
06:09:24.262 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.24
06:09:24.275 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - 2.1: create client with sharesHadoopClasses = false (7 seconds, 76 milliseconds)
[info] HivePartitionFilteringSuite(2.2):
06:09:31.756 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
06:09:31.756 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.24
06:09:31.769 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
06:09:32.573 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:09:51.121 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
06:09:51.121 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.24
06:09:52.217 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:10:06.283 WARN org.apache.spark.sql.hive.client.Shim_v2_2: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3030)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2582)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:176)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2963)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2947)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2772)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:2965)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2704)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy361.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:4821)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy362.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1228)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy363.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2577)
	... 68 more
[info] - 2.2: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (21 seconds, 387 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds<=>20170101 (244 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 (231 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (156 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk='aa' (196 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (106 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (87 milliseconds)
[info] - 2.2: getPartitionsByFilter: 20170101=ds (103 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 and h=2 (206 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (69 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 or ds=20170102 (148 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (102 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (60 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (69 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (61 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (136 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (62 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (196 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (166 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (227 milliseconds)
06:10:11.079 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
06:10:16.120 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
06:10:16.120 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.24
06:10:16.131 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - 2.2: create client with sharesHadoopClasses = false (7 seconds, 493 milliseconds)
[info] HivePartitionFilteringSuite(2.3):
06:10:18.702 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:10:18.702 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:10:24.004 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
06:10:24.004 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@192.168.10.24
06:10:24.025 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
06:10:24.675 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:10:24.788 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:10:24.788 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:10:24.789 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:10:43.976 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:10:43.976 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:10:47.412 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
06:10:47.412 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@192.168.10.24
06:10:48.498 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:10:48.661 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:10:48.661 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:10:48.661 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:11:06.283 WARN org.apache.spark.sql.hive.client.Shim_v2_3: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3315)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2768)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:182)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:3248)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:3232)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2974)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3250)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2906)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy390.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5093)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy391.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1232)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy392.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2679)
	... 68 more
[info] - 2.3: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (24 seconds, 696 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds<=>20170101 (219 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 (270 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (195 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk='aa' (192 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (98 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (71 milliseconds)
[info] - 2.3: getPartitionsByFilter: 20170101=ds (104 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 and h=2 (245 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (62 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 or ds=20170102 (157 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (120 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (67 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (72 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (64 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (119 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (54 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (206 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (165 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (249 milliseconds)
06:11:10.827 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] - 2.3: create client with sharesHadoopClasses = false (1 second, 559 milliseconds)
[info] HivePartitionFilteringSuite(3.0):
Hive Session ID = 0dd6ebb1-3bf7-4190-99f2-295064f5d737
06:11:13.310 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:13.982 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:15.112 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:16.020 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.022 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.023 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.023 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.024 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.024 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.612 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.613 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.613 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.614 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.614 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:16.614 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:19.793 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.0.0
06:11:19.794 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.0.0, comment = Set by MetaStore jenkins@192.168.10.24
06:11:19.978 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
06:11:20.667 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:11:20.671 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:20.829 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
Hive Session ID = 96d695fd-5519-4391-a55d-a11c58e9929f
06:11:23.819 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:24.405 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:25.537 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:26.502 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:26.503 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:26.504 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:26.504 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:26.505 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:26.505 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:27.125 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:27.126 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:27.127 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:27.127 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:27.127 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:27.127 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:28.375 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.0.0
06:11:28.375 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.0.0, comment = Set by MetaStore jenkins@192.168.10.24
06:11:30.415 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:11:30.419 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:30.523 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:32.041 WARN org.apache.spark.sql.hive.client.Shim_v3_0: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:437)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:355)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:277)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:581)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3841)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:3287)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$600(ObjectStore.java:244)
	at org.apache.hadoop.hive.metastore.ObjectStore$8.getJdoResult(ObjectStore.java:3771)
	at org.apache.hadoop.hive.metastore.ObjectStore$8.getJdoResult(ObjectStore.java:3755)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3498)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3773)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:3428)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy416.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5766)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at com.sun.proxy.$Proxy418.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1433)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1427)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
	at com.sun.proxy.$Proxy419.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:3061)
	... 68 more
[info] - 3.0: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (10 seconds, 591 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds<=>20170101 (221 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 (233 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (180 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk='aa' (171 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (66 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (67 milliseconds)
[info] - 3.0: getPartitionsByFilter: 20170101=ds (116 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 and h=2 (202 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (60 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 or ds=20170102 (160 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (115 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (84 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (61 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (59 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (130 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (49 milliseconds)
[info] - 3.0: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (163 milliseconds)
[info] - 3.0: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (175 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (296 milliseconds)
06:11:36.075 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive Session ID = 07db5f2e-a22c-4cff-ab36-3668db8d3e46
[info] - 3.0: create client with sharesHadoopClasses = false (1 second, 482 milliseconds)
[info] HivePartitionFilteringSuite(3.1):
Hive Session ID = 8637a5b2-00f1-4e9f-9657-084b82ae463e
06:11:38.453 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:39.054 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:40.236 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:41.161 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.162 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.163 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.163 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.164 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.164 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.760 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.761 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.762 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.762 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.762 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:41.762 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:44.884 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
06:11:44.884 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore jenkins@192.168.10.24
06:11:45.098 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
06:11:45.626 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:11:45.630 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:45.736 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
Hive Session ID = 39530296-153a-4593-9dcf-ebe34bcbfcd4
06:11:48.436 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:48.987 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:50.208 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:11:51.143 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.145 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.145 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.146 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.146 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.146 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.745 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.746 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.746 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.746 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.747 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:51.747 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:11:52.903 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
06:11:52.903 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore jenkins@192.168.10.24
06:11:54.064 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:11:54.068 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:54.163 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:11:56.428 WARN org.apache.spark.sql.hive.client.Shim_v3_1: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:437)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:355)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:277)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:581)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3929)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:3375)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$800(ObjectStore.java:247)
	at org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:3859)
	at org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:3843)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3586)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3861)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:3516)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy442.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5883)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at com.sun.proxy.$Proxy444.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1444)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1438)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
	at com.sun.proxy.$Proxy445.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:3139)
	... 68 more
[info] - 3.1: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (10 seconds, 118 milliseconds)
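The MetaException above is the metastore's JDO filter path rejecting pushdown: `ExpressionTree` only supports filtering on string-typed partition keys, so with `hive.metastore.try.direct.sql=false` the integer `ds`/`h` predicates fail and Spark's shim falls back to fetching all partitions, which is exactly what the "3.1: getPartitionsByFilter returns all partitions" test asserts. A minimal sketch of the two settings involved (both config keys appear in the log; the session wiring is illustrative only):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("partition-pruning-sketch") // illustrative name
  .enableHiveSupport()
  // Push partition predicates down to the metastore instead of listing
  // every partition on the driver.
  .config("spark.sql.hive.metastorePartitionPruning", "true")
  // Let the metastore evaluate filters via direct SQL; its JDO/ORM
  // fallback only handles string partition keys, which is the
  // MetaException seen above. For a remote metastore this key belongs
  // in the metastore's own hive-site.xml, not on the client.
  .config("hive.metastore.try.direct.sql", "true")
  .getOrCreate()
```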
[info] - 3.1: getPartitionsByFilter: ds<=>20170101 (229 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 (240 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (189 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk='aa' (182 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (78 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (66 milliseconds)
[info] - 3.1: getPartitionsByFilter: 20170101=ds (110 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 and h=2 (208 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (70 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 or ds=20170102 (170 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (117 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (63 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (81 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (61 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (157 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (69 milliseconds)
[info] - 3.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (192 milliseconds)
[info] - 3.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (188 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (306 milliseconds)
06:12:00.436 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive Session ID = 83adbb3a-2599-45cc-a112-f56007734220
[info] - 3.1: create client with sharesHadoopClasses = false (1 second, 57 milliseconds)
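The predicate names in this suite map onto WHERE clauses over a table partitioned by `ds`, `h`, and `chunk`. A rough SQL-level illustration of what ends up in `getPartitionsByFilter` (table name and data are hypothetical; the suite drives the client API directly):

```scala
// Hypothetical table mirroring the suite's partition layout.
spark.sql(
  """CREATE TABLE filter_demo (value INT)
    |PARTITIONED BY (ds INT, h INT, chunk STRING)
    |STORED AS PARQUET""".stripMargin)

// When pushdown succeeds, a predicate like this is turned into a
// metastore filter string and only matching partitions are listed.
spark.sql(
  """SELECT * FROM filter_demo
    |WHERE chunk IN ('ab', 'ba')
    |  AND ((ds = 20170101 AND h >= 2) OR (ds = 20170102 AND h < 2))""".stripMargin
).explain()
```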
[info] ErrorPositionSuite:
[info] - ambiguous attribute reference 1 (28 milliseconds)
[info] - ambiguous attribute reference 2 (8 milliseconds)
[info] - ambiguous attribute reference 3 (2 milliseconds)
06:12:01.841 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/src specified for non-external table:src
[info] - unresolved attribute 1 (849 milliseconds)
[info] - unresolved attribute 2 (3 milliseconds)
[info] - unresolved attribute 3 (4 milliseconds)
[info] - unresolved attribute 4 (4 milliseconds)
[info] - unresolved attribute 5 (3 milliseconds)
[info] - unresolved attribute 6 (6 milliseconds)
[info] - unresolved attribute 7 (9 milliseconds)
[info] - multi-char unresolved attribute (4 milliseconds)
[info] - unresolved attribute group by (13 milliseconds)
[info] - unresolved attribute order by (13 milliseconds)
[info] - unresolved attribute where (7 milliseconds)
[info] - unresolved attribute backticks (8 milliseconds)
[info] - parse error (28 milliseconds)
[info] - bad relation (5 milliseconds)
[info] - other expressions !!! IGNORED !!!
06:12:04.434 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/src does not exist; Force to delete it.
06:12:04.434 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/src
06:12:04.571 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:12:04.571 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:12:04.574 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:12:04.916 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:12:04.917 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:12:04.920 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
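The ErrorPositionSuite cases above verify that analysis errors carry the position of the offending token. A minimal sketch of what an "unresolved attribute" case observes, assuming a registered table `src`:

```scala
import org.apache.spark.sql.AnalysisException

try {
  spark.sql("SELECT bad_column FROM src").collect()
} catch {
  case e: AnalysisException =>
    // The suite asserts that these point at the offending token
    // ("bad_column") in the original query text.
    println(s"line=${e.line}, startPosition=${e.startPosition}")
}
```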
[info] HiveShowCreateTableSuite:
06:12:05.102 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:05.535 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with user specified schema (722 milliseconds)
06:12:06.127 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:06.320 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table CTAS (717 milliseconds)
06:12:06.898 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:07.455 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned data source table (1 second, 183 milliseconds)
06:12:07.993 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:08.165 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - bucketed data source table (686 milliseconds)
06:12:08.758 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:09.100 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned bucketed data source table (940 milliseconds)
06:12:09.591 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:09.746 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with a comment (644 milliseconds)
06:12:10.217 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:10.376 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with table properties (581 milliseconds)
[info] - data source table using Dataset API (1 second, 371 milliseconds)
[info] - temp view (32 milliseconds)
06:12:11.976 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:12:12.280 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - SPARK-24911: keep quotes for nested fields (530 milliseconds)
[info] - view (499 milliseconds)
[info] - view with output columns (489 milliseconds)
[info] - view with table comment and properties (484 milliseconds)
06:12:13.977 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
06:12:14.237 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - simple hive table (453 milliseconds)
[info] - simple external hive table (283 milliseconds)
06:12:14.712 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
06:12:14.940 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - partitioned hive table (461 milliseconds)
06:12:15.173 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
06:12:15.401 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive table with explicit storage info (457 milliseconds)
06:12:15.628 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
06:12:15.909 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause (518 milliseconds)
06:12:16.147 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
06:12:16.400 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive table with serde info (539 milliseconds)
06:12:16.692 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
06:12:16.949 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive bucketing is supported (482 milliseconds)
06:12:17.170 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive partitioned view is not supported (411 milliseconds)
06:12:17.584 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
06:12:17.880 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - SPARK-24911: keep quotes for nested fields in hive (545 milliseconds)
06:12:18.126 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - simple hive table in Spark DDL (495 milliseconds)
[info] - show create table as serde can't work on data source table (258 milliseconds)
[info] - simple external hive table in Spark DDL (271 milliseconds)
06:12:19.156 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause in Spark DDL (495 milliseconds)
06:12:19.647 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive table with nested fields with STORED AS clause in Spark DDL (556 milliseconds)
06:12:20.205 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive table with unsupported fileformat in Spark DDL (244 milliseconds)
06:12:20.448 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - hive table with serde info in Spark DDL (473 milliseconds)
06:12:20.922 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - partitioned, bucketed hive table in Spark DDL (490 milliseconds)
06:12:21.413 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-3dbc0507-8acb-4759-954c-342de1b47afe/t1 specified for non-external table:t1
[info] - show create table for transactional hive table (273 milliseconds)
06:12:21.792 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:12:21.792 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:12:21.792 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:12:21.904 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:12:21.904 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:12:21.904 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
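HiveShowCreateTableSuite exercises `SHOW CREATE TABLE` and its `AS SERDE` variant; as the "show create table as serde can't work on data source table" case above indicates, Hive-compatible DDL is only emitted for Hive tables. A hedged sketch with illustrative table definitions:

```scala
// Data source table: SHOW CREATE TABLE emits Spark DDL; AS SERDE is rejected.
spark.sql("CREATE TABLE ddl_demo (c1 INT, c2 STRING) USING json")
spark.sql("SHOW CREATE TABLE ddl_demo").show(truncate = false)

// Hive table: AS SERDE emits Hive-compatible DDL (serde, storage format).
spark.sql("CREATE TABLE hive_demo (c1 INT) STORED AS ORC")
spark.sql("SHOW CREATE TABLE hive_demo AS SERDE").show(truncate = false)
```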
[info] BigDataBenchmarkSuite:
[info] - No data files found for BigDataBenchmark tests. !!! IGNORED !!!
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 2.372s
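JavaDataFrameSuite.testUDAF runs a Java-defined aggregate whose body is not in the log, so the following is only a generic sketch of registering a typed `Aggregator` as a UDAF via Spark 3.x's `functions.udaf`; the aggregator and function name are hypothetical:

```scala
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.functions.udaf

// Hypothetical sum aggregator: input Long, buffer Long, output Long.
object LongSum extends Aggregator[Long, Long, Long] {
  def zero: Long = 0L
  def reduce(b: Long, a: Long): Long = b + a
  def merge(b1: Long, b2: Long): Long = b1 + b2
  def finish(r: Long): Long = r
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}

spark.udf.register("long_sum", udaf(LongSum))
spark.sql("SELECT long_sum(id) FROM range(10)").show() // sums 0..9 = 45
```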
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
06:12:24.708 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.557s
[info] ScalaTest
[info] Run completed in 2 hours, 56 minutes, 33 seconds.
[info] Total number of tests run: 3637
[info] Suites: completed 131, aborted 0
[info] Tests: succeeded 3637, failed 0, canceled 0, ignored 598, pending 0
[info] All tests passed.
[info] Passed: Total 3640, Failed 0, Errors 0, Passed 3640, Ignored 598
[error] (streaming/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 10689 s, completed May 15, 2020 6:13:03 AM
[error] running /home/jenkins/workspace/NewSparkPullRequestBuilder@2/build/sbt -Phadoop-2.7 -Phive-2.3 -Pyarn -Phadoop-cloud -Phive -Pmesos -Pspark-ganglia-lgpl -Pkubernetes -Pkinesis-asl -Phive-thriftserver -Dtest.exclude.tags=org.apache.spark.tags.ExtendedYarnTest test ; received return code 1
Attempting to post to Github...
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE