Passed

pyspark.streaming.tests.CheckpointTests.test_transform_function_serializer_failure (from pyspark.streaming.tests.CheckpointTests-20210414114425)

Took 1.1 sec.

Standard Output


Standard Error

Traceback (most recent call last):
  File "/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/python/pyspark/serializers.py", line 597, in dumps
    return cloudpickle.dumps(obj, 2)
  File "/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/python/pyspark/cloudpickle.py", line 863, in dumps
    cp.dump(obj)
  File "/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/python/pyspark/cloudpickle.py", line 260, in dump
    return Pickler.dump(self, obj)
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 409, in dump
    self.save(obj)
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 736, in save_tuple
    save(element)
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/python/pyspark/cloudpickle.py", line 406, in save_function
    self.save_function_tuple(obj)
  File "/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/python/pyspark/cloudpickle.py", line 549, in save_function_tuple
    save(state)
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 821, in save_dict
    self._batch_setitems(obj.items())
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 847, in _batch_setitems
    save(v)
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 781, in save_list
    self._batch_appends(obj)
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 808, in _batch_appends
    save(tmp[0])
  File "/home/anaconda/envs/py3k/lib/python3.6/pickle.py", line 496, in save
    rv = reduce(self.proto)
  File "/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/python/pyspark/context.py", line 339, in __getnewargs__
    "It appears that you are attempting to reference SparkContext from a broadcast "
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
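Note: this traceback is the expected error path for a test named test_transform_function_serializer_failure, which is why the result is "Passed". A minimal sketch of the kind of code that produces it (an illustration under assumptions, not the actual test body): hand a function that references the driver-side SparkContext to PySpark's CloudPickleSerializer, which is the cloudpickle.dumps call at the top of the traceback. The master URL, app name, and bad_transform function below are illustrative.

from pyspark import SparkContext
from pyspark.serializers import CloudPickleSerializer

# Assumed local master and app name, purely for illustration.
sc = SparkContext("local[2]", "spark-5063-sketch")

# A transform-style function that (incorrectly) references the driver-side sc.
def bad_transform(time, rdd):
    return sc.parallelize(range(10))

try:
    # Mirrors serializers.py dumps -> cloudpickle.dumps(obj, 2) from the traceback:
    # pickling the function pulls in sc, and SparkContext.__getnewargs__ raises.
    CloudPickleSerializer().dumps(bad_transform)
except Exception as e:
    print(e)  # message contains the SPARK-5063 text shown in Standard Error
finally:
    sc.stop()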