Failed
Console Output

Skipping 21,200 KB of earlier output (log truncated; the omitted portion is in the full log on the build page)
[info] - 2.1: getPartitionsByFilter: chunk in ('ab', 'ba') and ((cast(ds as string)>'20170102') (89 milliseconds)
[info] HiveClientSuite(2.2):
13:24:03.387 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
13:24:03.387 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@127.0.1.1
13:24:03.400 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
13:24:05.253 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
13:24:20.692 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
13:24:20.693 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@127.0.1.1
13:24:22.958 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
13:24:35.244 WARN org.apache.spark.sql.hive.client.Shim_v2_2: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:761)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:690)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:688)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:688)
	at org.apache.spark.sql.hive.client.HiveClientSuite$$anonfun$23.apply(HiveClientSuite.scala:86)
	at org.apache.spark.sql.hive.client.HiveClientSuite$$anonfun$23.apply(HiveClientSuite.scala:84)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
	at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite$class.run(Suite.scala:1147)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1210)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1257)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1255)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1255)
	at org.apache.spark.sql.hive.client.HiveClientSuites.runNestedSuites(HiveClientSuites.scala:24)
	at org.scalatest.Suite$class.run(Suite.scala:1144)
	at org.apache.spark.sql.hive.client.HiveClientSuites.run(HiveClientSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3030)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2582)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:176)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2963)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2947)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2772)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:2965)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2704)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy204.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:4821)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy205.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1228)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy206.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2577)
	... 65 more
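The warning and stack trace above are expected here: as the test result just below shows, the suite deliberately runs with hive.metastore.try.direct.sql=false to exercise the fallback path. Outside of tests, the hint in the warning applies. A minimal sketch of one way to apply it, assuming an embedded metastore whose configuration Spark builds from spark.hadoop.* entries; for a standalone metastore service the property belongs in that service's hive-site.xml instead:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("direct-sql-demo")
      // spark.hadoop.* entries are copied into the Hadoop/Hive configuration,
      // so an embedded metastore should pick this up at client creation.
      .config("spark.hadoop.hive.metastore.try.direct.sql", "true")
      .enableHiveSupport()
      .getOrCreate()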
[info] - 2.2: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (21 seconds, 481 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds<=>20170101 (481 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 (278 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (174 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk='aa' (203 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (117 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (79 milliseconds)
[info] - 2.2: getPartitionsByFilter: 20170101=ds (112 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 and h=10 (203 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long)=20170101L and h=10 (77 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 or ds=20170102 (214 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (124 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (72 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (74 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (74 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (157 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (68 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=8) or (ds=20170102 and h<8) (198 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=8) or (ds=20170102 and h<(7+1)) (176 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=8) or (ds=20170102 and h<8)) (255 milliseconds)
13:24:41.946 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13:24:47.091 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
13:24:47.092 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@127.0.1.1
13:24:47.106 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - 2.2: create client with sharesHadoopClasses = false (7 seconds, 842 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') and ((cast(ds as string)>'20170102') (83 milliseconds)
[info] HiveClientSuite(2.3):
13:24:53.148 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
13:24:53.148 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
13:24:58.232 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
13:24:58.232 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@127.0.1.1
13:24:58.256 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
13:25:00.033 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
13:25:00.226 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
13:25:00.226 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
13:25:12.721 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
13:25:12.721 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
13:25:16.200 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
13:25:16.200 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@127.0.1.1
13:25:18.207 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
13:25:18.374 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
13:25:18.375 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
13:25:31.146 WARN org.apache.spark.sql.hive.client.Shim_v2_3: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:761)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:690)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:688)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:688)
	at org.apache.spark.sql.hive.client.HiveClientSuite$$anonfun$23.apply(HiveClientSuite.scala:86)
	at org.apache.spark.sql.hive.client.HiveClientSuite$$anonfun$23.apply(HiveClientSuite.scala:84)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
	at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite$class.run(Suite.scala:1147)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1210)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1257)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1255)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1255)
	at org.apache.spark.sql.hive.client.HiveClientSuites.runNestedSuites(HiveClientSuites.scala:24)
	at org.scalatest.Suite$class.run(Suite.scala:1144)
	at org.apache.spark.sql.hive.client.HiveClientSuites.run(HiveClientSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3069)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2522)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:180)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:3002)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:2986)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2728)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3004)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2660)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy233.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5084)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy234.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1232)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy235.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2679)
	... 65 more
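The same expected fallback occurs against the 2.3 client. The root cause named in the MetaException is a metastore limitation: with direct SQL disabled, the JDO/ORM filter path can push predicates down only on partition keys of type string. A hypothetical sketch of the distinction, with illustrative table and column names (not from this suite):

    import org.apache.spark.sql.SparkSession
    val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

    // INT partition key: with direct SQL off, the metastore cannot evaluate
    // the filter itself, so clients fall back to fetching all partitions.
    spark.sql("CREATE TABLE part_int (v INT) PARTITIONED BY (ds INT)")
    spark.sql("SELECT * FROM part_int WHERE ds = 20170101").show()

    // STRING partition key: the JDO path can push this filter down directly.
    spark.sql("CREATE TABLE part_str (v INT) PARTITIONED BY (ds STRING)")
    spark.sql("SELECT * FROM part_str WHERE ds = '20170101'").show()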
[info] - 2.3: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (22 seconds, 120 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds<=>20170101 (510 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 (305 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (198 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk='aa' (199 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (130 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (84 milliseconds)
[info] - 2.3: getPartitionsByFilter: 20170101=ds (124 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 and h=10 (217 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long)=20170101L and h=10 (74 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 or ds=20170102 (243 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (171 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (83 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (94 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (82 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (158 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (65 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=8) or (ds=20170102 and h<8) (190 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=8) or (ds=20170102 and h<(7+1)) (194 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=8) or (ds=20170102 and h<8)) (281 milliseconds)
13:25:37.452 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] - 2.3: create client with sharesHadoopClasses = false (1 second, 221 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') and ((cast(ds as string)>'20170102') (78 milliseconds)
[info] ParquetMetastoreSuite:
13:25:46.449 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_parquet specified for non-external table:test_parquet
[info] - ordering of the partitioning columns partitioned_parquet (475 milliseconds)
[info] - project the partitioning column partitioned_parquet (737 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet (654 milliseconds)
[info] - simple count partitioned_parquet (218 milliseconds)
[info] - pruned count partitioned_parquet (243 milliseconds)
[info] - non-existent partition partitioned_parquet (162 milliseconds)
[info] - multi-partition pruned count partitioned_parquet (251 milliseconds)
[info] - non-partition predicates partitioned_parquet (249 milliseconds)
[info] - sum partitioned_parquet (269 milliseconds)
[info] - hive udfs partitioned_parquet (326 milliseconds)
[info] - ordering of the partitioning columns partitioned_parquet_with_key (320 milliseconds)
[info] - project the partitioning column partitioned_parquet_with_key (463 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet_with_key (503 milliseconds)
[info] - simple count partitioned_parquet_with_key (165 milliseconds)
[info] - pruned count partitioned_parquet_with_key (179 milliseconds)
[info] - non-existent partition partitioned_parquet_with_key (139 milliseconds)
[info] - multi-partition pruned count partitioned_parquet_with_key (230 milliseconds)
[info] - non-partition predicates partitioned_parquet_with_key (231 milliseconds)
[info] - sum partitioned_parquet_with_key (186 milliseconds)
[info] - hive udfs partitioned_parquet_with_key (207 milliseconds)
[info] - ordering of the partitioning columns partitioned_parquet_with_complextypes (277 milliseconds)
[info] - project the partitioning column partitioned_parquet_with_complextypes (422 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet_with_complextypes (487 milliseconds)
[info] - simple count partitioned_parquet_with_complextypes (158 milliseconds)
[info] - pruned count partitioned_parquet_with_complextypes (160 milliseconds)
[info] - non-existent partition partitioned_parquet_with_complextypes (135 milliseconds)
[info] - multi-partition pruned count partitioned_parquet_with_complextypes (206 milliseconds)
[info] - non-partition predicates partitioned_parquet_with_complextypes (211 milliseconds)
[info] - sum partitioned_parquet_with_complextypes (189 milliseconds)
[info] - hive udfs partitioned_parquet_with_complextypes (241 milliseconds)
[info] - ordering of the partitioning columns partitioned_parquet_with_key_and_complextypes (323 milliseconds)
[info] - project the partitioning column partitioned_parquet_with_key_and_complextypes (517 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet_with_key_and_complextypes (572 milliseconds)
[info] - simple count partitioned_parquet_with_key_and_complextypes (158 milliseconds)
[info] - pruned count partitioned_parquet_with_key_and_complextypes (169 milliseconds)
[info] - non-existent partition partitioned_parquet_with_key_and_complextypes (152 milliseconds)
[info] - multi-partition pruned count partitioned_parquet_with_key_and_complextypes (202 milliseconds)
[info] - non-partition predicates partitioned_parquet_with_key_and_complextypes (209 milliseconds)
[info] - sum partitioned_parquet_with_key_and_complextypes (177 milliseconds)
[info] - hive udfs partitioned_parquet_with_key_and_complextypes (228 milliseconds)
[info] - SPARK-5775 read struct from partitioned_parquet_with_key_and_complextypes (204 milliseconds)
[info] - SPARK-5775 read array from partitioned_parquet_with_key_and_complextypes (191 milliseconds)
[info] - SPARK-5775 read struct from partitioned_parquet_with_complextypes (131 milliseconds)
[info] - SPARK-5775 read array from partitioned_parquet_with_complextypes (146 milliseconds)
[info] - non-part select(*) (178 milliseconds)
[info] - conversion is working (30 milliseconds)
[info] - scan an empty parquet table (100 milliseconds)
[info] - scan an empty parquet table with upper case (112 milliseconds)
13:26:01.216 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_insert_parquet specified for non-external table:test_insert_parquet
13:26:02.132 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_insert_parquet specified for non-external table:test_insert_parquet
[info] - insert into an empty parquet table (1 second, 555 milliseconds)
13:26:02.788 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_parquet_ctas specified for non-external table:test_parquet_ctas
[info] - scan a parquet table created through a CTAS statement (421 milliseconds)
13:26:03.187 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_insert_parquet specified for non-external table:test_insert_parquet
[info] - MetastoreRelation in InsertIntoTable will be converted (453 milliseconds)
13:26:03.646 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_insert_parquet specified for non-external table:test_insert_parquet
[info] - MetastoreRelation in InsertIntoHiveTable will be converted (505 milliseconds)
13:26:04.145 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/ms_convert specified for non-external table:ms_convert
[info] - SPARK-6450 regression test (147 milliseconds)
13:26:04.298 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/nonpartitioned specified for non-external table:nonpartitioned
[info] - SPARK-7749: non-partitioned metastore Parquet table lookup should use cached relation (178 milliseconds)
13:26:04.472 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/partitioned specified for non-external table:partitioned
[info] - SPARK-7749: partitioned metastore Parquet table lookup should use cached relation (128 milliseconds)
13:26:04.600 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/partitioned specified for non-external table:partitioned
[info] - SPARK-15968: nonempty partitioned metastore Parquet table lookup should use cached relation (687 milliseconds)
13:26:05.302 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_insert_parquet specified for non-external table:test_insert_parquet
13:26:05.713 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_parquet_partitioned_cache_test specified for non-external table:test_parquet_partitioned_cache_test
[info] - Caching converted data source Parquet Relations (1 second, 850 milliseconds)
13:26:07.142 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_added_partitions specified for non-external table:test_added_partitions
[info] - SPARK-15248: explicitly added partitions should be readable (2 seconds, 319 milliseconds)
13:26:09.681 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_added_partitions specified for non-external table:test_added_partitions
[info] - Explicitly added partitions should be readable after load (1 second, 170 milliseconds)
13:26:10.852 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/tab specified for non-external table:tab
[info] - Non-partitioned table readable after load (701 milliseconds)
[info] - self-join (250 milliseconds)
[info] ParquetSourceSuite:
13:26:20.293 WARN org.apache.spark.sql.execution.datasources.DataSource: Found duplicate column(s) in the data schema and the partition schema: `p`;
13:26:20.408 WARN org.apache.spark.sql.execution.datasources.DataSource: Found duplicate column(s) in the data schema and the partition schema: `p`;
[info] - ordering of the partitioning columns partitioned_parquet (214 milliseconds)
[info] - project the partitioning column partitioned_parquet (392 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet (400 milliseconds)
[info] - simple count partitioned_parquet (172 milliseconds)
[info] - pruned count partitioned_parquet (143 milliseconds)
[info] - non-existent partition partitioned_parquet (100 milliseconds)
[info] - multi-partition pruned count partitioned_parquet (117 milliseconds)
[info] - non-partition predicates partitioned_parquet (222 milliseconds)
[info] - sum partitioned_parquet (135 milliseconds)
[info] - hive udfs partitioned_parquet (159 milliseconds)
[info] - ordering of the partitioning columns partitioned_parquet_with_key (186 milliseconds)
[info] - project the partitioning column partitioned_parquet_with_key (366 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet_with_key (451 milliseconds)
[info] - simple count partitioned_parquet_with_key (166 milliseconds)
[info] - pruned count partitioned_parquet_with_key (150 milliseconds)
[info] - non-existent partition partitioned_parquet_with_key (138 milliseconds)
[info] - multi-partition pruned count partitioned_parquet_with_key (183 milliseconds)
[info] - non-partition predicates partitioned_parquet_with_key (247 milliseconds)
[info] - sum partitioned_parquet_with_key (296 milliseconds)
[info] - hive udfs partitioned_parquet_with_key (188 milliseconds)
[info] - ordering of the partitioning columns partitioned_parquet_with_complextypes (171 milliseconds)
[info] - project the partitioning column partitioned_parquet_with_complextypes (388 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet_with_complextypes (396 milliseconds)
[info] - simple count partitioned_parquet_with_complextypes (134 milliseconds)
[info] - pruned count partitioned_parquet_with_complextypes (114 milliseconds)
[info] - non-existent partition partitioned_parquet_with_complextypes (104 milliseconds)
[info] - multi-partition pruned count partitioned_parquet_with_complextypes (143 milliseconds)
[info] - non-partition predicates partitioned_parquet_with_complextypes (182 milliseconds)
[info] - sum partitioned_parquet_with_complextypes (142 milliseconds)
[info] - hive udfs partitioned_parquet_with_complextypes (203 milliseconds)
[info] - ordering of the partitioning columns partitioned_parquet_with_key_and_complextypes (194 milliseconds)
[info] - project the partitioning column partitioned_parquet_with_key_and_complextypes (427 milliseconds)
[info] - project partitioning and non-partitioning columns partitioned_parquet_with_key_and_complextypes (447 milliseconds)
[info] - simple count partitioned_parquet_with_key_and_complextypes (150 milliseconds)
[info] - pruned count partitioned_parquet_with_key_and_complextypes (134 milliseconds)
[info] - non-existent partition partitioned_parquet_with_key_and_complextypes (120 milliseconds)
[info] - multi-partition pruned count partitioned_parquet_with_key_and_complextypes (168 milliseconds)
[info] - non-partition predicates partitioned_parquet_with_key_and_complextypes (210 milliseconds)
[info] - sum partitioned_parquet_with_key_and_complextypes (167 milliseconds)
[info] - hive udfs partitioned_parquet_with_key_and_complextypes (168 milliseconds)
[info] - SPARK-5775 read struct from partitioned_parquet_with_key_and_complextypes (140 milliseconds)
[info] - SPARK-5775 read array from partitioned_parquet_with_key_and_complextypes (123 milliseconds)
[info] - SPARK-5775 read struct from partitioned_parquet_with_complextypes (91 milliseconds)
[info] - SPARK-5775 read array from partitioned_parquet_with_complextypes (114 milliseconds)
[info] - non-part select(*) (118 milliseconds)
[info] - SPARK-6016 make sure to use the latest footers (705 milliseconds)
[info] - SPARK-8811: compatibility with array of struct in Hive (735 milliseconds)
13:26:31.143 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_parquet_ctas specified for non-external table:test_parquet_ctas
13:26:31.536 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/test_parquet_ctas specified for non-external table:test_parquet_ctas
[info] - Verify the PARQUET conversion parameter: CONVERT_METASTORE_PARQUET (1 second, 22 milliseconds)
[info] - values in arrays and maps stored in parquet are always nullable (409 milliseconds)
[info] - Aggregation attribute names can't contain special chars " ,;{}()\n\t=" (825 milliseconds)
[info] HiveUDAFSuite:
[info] - built-in Hive UDAF (299 milliseconds)
[info] - customized Hive UDAF (316 milliseconds)
[info] - SPARK-24935: customized Hive UDAF with two aggregation buffers (512 milliseconds)
[info] - call JAVA UDAF (394 milliseconds)
[info] - non-deterministic children expressions of UDAF (36 milliseconds)
13:26:35.163 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/tmp/warehouse-fbf5b7f0-4cee-4b09-8234-18ec7cdab81d/abc specified for non-external table:abc
[info] - SPARK-27907 HiveUDAF with 0 rows throws NPE (1 second, 324 milliseconds)
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveExternalTableAndQueryIt started
13:26:36.989 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
13:26:37.210 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`externaltable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
13:26:37.710 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveExternalTableWithSchemaAndQueryIt started
13:26:38.076 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
13:26:38.293 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`externaltable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 3 total, 2.066s
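The "Couldn't find corresponding Hive SerDe" warnings above are benign for these tests: they fire whenever a data source table has no Hive SerDe mapping, so its metadata is persisted in Spark SQL's own format and the table is readable from Spark but not from Hive itself. A hedged sketch of the pattern that triggers it (table name illustrative):

    import org.apache.spark.sql.SparkSession
    val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

    // JSON has no Hive SerDe mapping, so saving it as a table stores
    // Spark-specific metadata and logs the warning seen above.
    spark.range(3).write.format("json").saveAsTable("json_backed_demo")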
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 2.14s
[info] ScalaTest
[info] Run completed in 1 hour, 13 minutes, 30 seconds.
[info] Total number of tests run: 3276
[info] Suites: completed 97, aborted 1
[info] Tests: succeeded 3276, failed 0, canceled 0, ignored 596, pending 0
[info] *** 1 SUITE ABORTED ***
[error] Error: Total 3282, Failed 0, Errors 1, Passed 3281, Ignored 596
[error] Error during tests:
[error] 	org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite
[error] (hive/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 4458 s, completed Jun 3, 2021 1:26:57 PM
[error] running /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/sbt -Phadoop-2.6 -Pkubernetes -Pflume -Phive-thriftserver -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pmesos test ; received return code 1
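Note that all 3,276 tests that actually ran passed; the build failed only because HiveExternalCatalogVersionsSuite aborted before reaching its tests. Assuming the standard Spark sbt layout, one plausible way to retry just that suite is:

    build/sbt -Phive "hive/testOnly org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite"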
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
[Checks API] No suitable checks publisher found.
Finished: FAILURE