spark.timeseries

TimeSeriesSpark

object TimeSeriesSpark extends AnyRef

Time Series Spark main class. Implements a generic time series data analysis API using RunRDD and scala.Arrays of values. The class includes functions for setting the caching method, creating a SparkContext, splitting data into scala.Arrays of numbers, and splitting data files into buckets based on their common properties.

For log data examples, see spark.timeseries.BucketLogsByHour.

Energy-measurement-specific example functions, such as the IdleEnergyArrayDetector class, are deprecated. For current implementations, see IdleEnergyArrayDetector and the spark.timeseries.EnergyOps example class.

A secondary API using spark.timeseries.MeasurementRunRDD is restricted to scala.Tuple3 values. It is deprecated.
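A minimal usage sketch of this API (the file path and master string are hypothetical, and the import assumes the pre-Apache `spark` package layout this class was built against):

```scala
import spark.timeseries.TimeSeriesSpark

object UsageSketch {
  def main(args: Array[String]) {
    // Run Spark locally with two threads and the "kryo" caching scheme.
    val sc = TimeSeriesSpark.init("local[2]", "kryo")
    // Parse each line of comma-separated numbers into an Array[Double].
    val rows = sc.textFile("measurements.csv").map(TimeSeriesSpark.genericMapper(_, ","))
    println(rows.count())
  }
}
```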

Linear Supertypes
AnyRef, Any

Type Members

  1. class IdleEnergyTupleDetector extends MeasurementRunDetector[(Double, Double, Double)]

Detect periods of activity from a data file of scala.Tuple3s of scala.Double values, where each tuple consists of time, current, and voltage.

Value Members

  1. def != (arg0: AnyRef): Boolean

    Attributes
    final
    Definition Classes
    AnyRef
  2. def != (arg0: Any): Boolean

    Attributes
    final
    Definition Classes
    Any
  3. def ## (): Int

    Attributes
    final
    Definition Classes
    AnyRef → Any
  4. def == (arg0: AnyRef): Boolean

    Attributes
    final
    Definition Classes
    AnyRef
  5. def == (arg0: Any): Boolean

    Attributes
    final
    Definition Classes
    Any
  6. def asInstanceOf [T0] : T0

    Attributes
    final
    Definition Classes
    Any
  7. def bucketLogsByHour (sc: SparkContext, fileName: String, bucketMinutes: Int = 60): BucketRDD[Array[String]]

Split text file lines into 8 space-separated fields and bucket the lines by hour, assuming that messages are sorted by date and time, that the 6th field is the numeric day of the month, and that the 7th field is a time of the form HH:MM:SS.

    sc

    The SparkContext

    fileName

    The path to the file with all the log messages.

    bucketMinutes

The number of minutes of logs stored in each bucket. Currently unused.
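A short usage sketch (the log file path is hypothetical; assumes a running local Spark context):

```scala
// Bucket syslog-style messages by hour.
val sc = TimeSeriesSpark.init("local[1]")
val buckets = TimeSeriesSpark.bucketLogsByHour(sc, "logs/messages.txt")
// Each element is one hour's worth of split 8-field log lines.
println(buckets.count())
```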

  8. def clone (): AnyRef

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  9. def eq (arg0: AnyRef): Boolean

    Attributes
    final
    Definition Classes
    AnyRef
  10. def equals (arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def finalize (): Unit

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  12. def genericMapper (s: String, separator: String = ","): Array[Double]

Map a line of text of comma-separated numbers into a scala.Array of scala.Doubles.

    s

    the line of text to split.

    separator

    The separator (regexp). Default ","
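The mapping can be sketched as a plain split-and-parse (an illustration of the documented behavior, not necessarily the exact implementation; the whitespace trimming here is an assumption of this sketch):

```scala
// Split on the separator (a regular expression) and parse each
// field as a Double.
def mapLine(s: String, separator: String = ","): Array[Double] =
  s.trim.split(separator).map(_.toDouble)

mapLine("0.001,150.2,3.95")   // Array(0.001, 150.2, 3.95)
```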

  13. def getClass (): java.lang.Class[_]

    Attributes
    final
    Definition Classes
    AnyRef → Any
  14. def hashCode (): Int

    Definition Classes
    AnyRef → Any
  15. def init (master: String, cache: String = null, sparkName: String = "TimeSeriesSpark"): SparkContext

Create the SparkContext and set the caching class.

    master

    the Mesos Master, or local[number of threads] for running Spark locally.

    cache

    The name of the caching system to use. One of "kryo" or "bounded". Other values are treated as leaving the caching scheme unchanged. The null value is allowed. Default: null.

    sparkName

    The name of the application for Spark. Default: "TimeSeriesSpark".

    returns

    A new SparkContext
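For example (the Mesos master string, host name, and application name are hypothetical):

```scala
// Local mode, two threads, default caching and application name.
val local = TimeSeriesSpark.init("local[2]")

// Against a Mesos master, with bounded-memory caching and a custom name.
val cluster = TimeSeriesSpark.init("master.example.com:5050", "bounded", "MyAnalysis")
```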

  16. def isInstanceOf [T0] : Boolean

    Attributes
    final
    Definition Classes
    Any
  17. def ne (arg0: AnyRef): Boolean

    Attributes
    final
    Definition Classes
    AnyRef
  18. def notify (): Unit

    Attributes
    final
    Definition Classes
    AnyRef
  19. def notifyAll (): Unit

    Attributes
    final
    Definition Classes
    AnyRef
  20. def setCache (cache: String = null): Unit

Set the caching class. Called by init(). Use this method if you do not need a new SparkContext (for example, from the Spark Scala interpreter).

    cache

    The name of the caching system to use. One of "kryo" or "bounded". Other values are treated as leaving the caching scheme unchanged. The null value is allowed. Default: null.
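For example, from an interactive session where a SparkContext already exists (`sc` is assumed to be provided by the interpreter):

```scala
// Switch caching to Kryo without creating a new SparkContext.
TimeSeriesSpark.setCache("kryo")
val rows = sc.textFile("data.csv").map(TimeSeriesSpark.genericMapper(_))
```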

  21. def synchronized [T0] (arg0: ⇒ T0): T0

    Attributes
    final
    Definition Classes
    AnyRef
  22. def toString (): String

    Definition Classes
    AnyRef → Any
  23. def wait (): Unit

    Attributes
    final
    Definition Classes
    AnyRef
    Annotations
    @throws()
  24. def wait (arg0: Long, arg1: Int): Unit

    Attributes
    final
    Definition Classes
    AnyRef
    Annotations
    @throws()
  25. def wait (arg0: Long): Unit

    Attributes
    final
    Definition Classes
    AnyRef
    Annotations
    @throws()

Deprecated Value Members

  1. def energymapper (time: Double, amp: Double, volt: Double): Double

Map a current and voltage pair into their product.

    time

    the time, in s

    amp

    the current, in mA

    volt

    the voltage, in volts

    returns

    amp*volt

    Annotations
    @deprecated
    Deprecated

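Because the current is given in mA and the voltage in volts, the product is a power in mW; a quick worked check:

```scala
// 200 mA at 3.7 V gives 200 * 3.7 = 740 mW.
val powerMilliwatts = 200.0 * 3.7   // 740.0
```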

  2. def tuple3Mapper (s: String): (Double, Double, Double)

Map a line of text of comma-separated numbers into a (Double, Double, Double), ignoring the first number on each line (the measurement number).

    s

    the line of text to split.

    Annotations
    @deprecated
    Deprecated

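The transformation can be sketched as follows (an illustration of the documented behavior, not the exact implementation):

```scala
// Drop the leading measurement number; keep (time, current, voltage).
def toTuple3(s: String): (Double, Double, Double) = {
  val f = s.split(",").map(_.toDouble)
  (f(1), f(2), f(3))
}

toTuple3("1,0.001,150.2,3.95")   // (0.001, 150.2, 3.95)
```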

Inherited from AnyRef

Inherited from Any