Examples

All examples can be found on GitHub (https://github.com/NorthConcepts/DataPipeline-Examples).
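Most of the examples below build on DataPipeline's basic pattern of transferring records from a `DataReader` to a `DataWriter` with `Job.run`. A minimal sketch, assuming the `com.northconcepts.datapipeline` jar is on the classpath and using a hypothetical `input.csv` file:

```java
import java.io.File;

import com.northconcepts.datapipeline.core.DataReader;
import com.northconcepts.datapipeline.core.DataWriter;
import com.northconcepts.datapipeline.core.StreamWriter;
import com.northconcepts.datapipeline.csv.CSVReader;
import com.northconcepts.datapipeline.job.Job;

public class ReadCsvToConsole {
    public static void main(String[] args) {
        // Read records from a CSV file, treating the first row as field names
        DataReader reader = new CSVReader(new File("input.csv"))
                .setFieldNamesInFirstRow(true);

        // Write each record to the console
        DataWriter writer = StreamWriter.newSystemOutWriter();

        // Transfer all records from the reader to the writer
        Job.run(reader, writer);
    }
}
```

Swapping in a different reader (Excel, JSON, Parquet) or writer (database, S3, XML) follows the same shape, which is what most of the examples in this list demonstrate.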

  1. Buffer Records by Time Period or Count
  2. Capture Data Not Joined in a Lookup
  3. Compile and Run a Job
  4. Continue After an Error
  5. Convert a Single Source DataReader into Many
  6. Convert JSON to CSV
  7. Debug my Code
  8. Extract Bigrams, Trigrams, and Ngrams
  9. Generate a PDF
  10. Generate a Word Document
  11. Group Records by Time Period or Count
  12. Handle Exceptions
  13. Log Diagnostic Information
  14. Measure Data being Read and Written
  15. Measure Data Lineage Performance
  16. Measure Performance of Reader and Writer
  17. Obtain Statistics
  18. Open and Close Several Data Readers and Data Writers at Once
  19. Pipe a Writer to a Reader
  20. Profile Performance
  21. Read BigDecimal and BigInteger from an Excel file
  22. Read a Bloomberg Message File
  23. Read a CSV File
  24. Read a Fixed-width File / Fixed-length Record File
  25. Read a JSON Stream
  26. Read a Parquet File
  27. Read a Patient File
  28. Read an Orc File
  29. Read Selected Fields from an Orc File
  30. Read a Simple JSON File
  31. Read a Simple XML File
  32. Read a Web Server Log
  33. Read an Avro File
  34. Read an XML File
  35. Read an XML File (2)
  36. Read and Write to an EventBus
  37. Read BigDecimal in JSON
  38. Read Emails
  39. Read from Amazon S3
  40. Read Parquet from Amazon S3
  41. Read Parquet from Amazon S3 using a Temporary File
  42. Read an Orc File from Amazon S3
  43. Read an Orc File from Amazon S3 using a Temporary File
  44. Read from a Database
  45. Read from an Excel File
  46. Read from Gmail
  47. Read from Java beans
  48. Read from JMS Queue
  49. Read from JMS Topic
  50. Read from Memory
  51. Read from MongoDB
  52. Read Google Analytics Goal Conversions
  53. Read Google Analytics Social Interactions
  54. Read Google Analytics Views
  55. Read Google Calendar
  56. Read Google Contacts
  57. Read Google Gmail Messages
  58. Read JSON Records From File
  59. Read Selected Fields from a Parquet File
  60. Read using TimedReader
  61. Read XML Records From File
  62. Search for a Record
  63. Search Twitter for Tweets
  64. Read Tweets from a User's Timeline Using v2 API
  65. Search Followers of a Twitter User Using v2 API
  66. Search Twitter for Tweets Using v2 API
  67. Serialize and Deserialize Data
  68. Serialize and Deserialize Records
  69. Throttle Data being Read
  70. Throttle Data being Written
  71. Use Multi Threading in a Single Job
  72. Use a Retrying Reader
  73. Use a Retrying Writer
  74. Use Data Lineage with CsvReader
  75. Use Data Lineage with ExcelReader
  76. Use Data Lineage with FixedWidthReader
  77. Use Data Lineage with JdbcReader
  78. Use Data Lineage with Lookup
  79. Use Data Lineage with ParquetReader
  80. Use Data Lineage with OrcReader
  81. Use Streaming Excel for Reading
  82. Use Streaming Excel for Writing
  83. Upsert Records to a Database Using Insert and Update
  84. Upsert Records to a Database Using Merge
  85. Upsert Records to MySQL or MariaDB
  86. Upsert Records to Oracle
  87. Upsert Records to PostgreSQL
  88. Upsert Records to Sybase
  89. Upsert Variable Field Records
  90. Write a CSV File to Database (1)
  91. Write a CSV File to Database (2)
  92. Write a CSV File to Fixed Width
  93. Write a Parquet File
  94. Write an Orc File
  95. Compress a Parquet File
  96. Compress an Orc File
  97. Write a Simple JSON File
  98. Write a Simple XML File
  99. Write a Sequence of Files by Record Count
  100. Write a Sequence of Files by Elapsed Time
  101. Write an Avro File
  102. Write an XML File Programmatically
  103. Write an XML File using FreeMarker Templates
  104. Write Arrays and Nested Records to XML using FreeMarker Templates
  105. Write CSV To XML Using FreeMarker Templates
  106. Write HTML using FreeMarker Templates
  107. Write Key-Value Fields to MapWriter
  108. Write to Amazon S3 Using Multipart Streaming
  109. Write Excel to Amazon S3
  110. Write Parquet to Amazon S3
  111. Write Parquet to Amazon S3 Using a Temporary File
  112. Write an Orc File to Amazon S3
  113. Write an Orc File to Amazon S3 using a Temporary File
  114. Write to a Database Using Custom Jdbc Insert Strategy
  115. Write to a Database Using Generic Upsert Strategy
  116. Write to a Database Using Merge Upsert Strategy
  117. Write to a Database Using Merge Upsert Strategy with Batch
  118. Write to a Database Using Multiple Connections
  119. Write to a Database Using Multi Row Prepared Statement Insert Strategy
  120. Write to a Database Using Multi Row Statement Insert Strategy
  121. Write to Excel
  122. Write to JMS Queue
  123. Write to JMS Topic
  124. Write to JSON Stream (simple)
  125. Write to JSON Stream Programmatically
  126. Write to Memory
  127. Write to MongoDB
  128. Write to Several Data Writers at Once
  129. Write to the Console
  130. Write to XML Stream (Simple)
  1. Add a Decision Table to a Pipeline
  2. Add a Decision Tree to a Pipeline
  3. Add Calculated Fields to a Decision Table
  4. Add Calculated Fields to a Decision Tree
  5. Conditionally map Data from Source to Target
  6. Conditionally map DataField from Source to Target
  7. Create Custom Pipeline Action
  8. Create Custom Pipeline Input
  9. Create Custom Pipeline Output
  10. Create a JsonPipelineInput Programmatically
  11. Create a JsonPipelineInput Declaratively from Json
  12. Create a JsonPipelineInput Declaratively From Xml
  13. Create an XmlPipelineInput Programmatically
  14. Create an XmlPipelineInput Declaratively from Json
  15. Create an XmlPipelineInput Declaratively From Xml
  16. Evaluate a Decision Table
  17. Evaluate a Decision Table with Lookup
  18. Evaluate a Decision Tree
  19. Evaluate a Decision Tree with Lookup
  20. Execute an Action in a Decision Table
  21. Execute an Action in a Decision Tree
  22. Filter Columns with All Null Values
  23. Generate Java Beans from a Database
  24. Map Data with Rule Based Validation
  25. Map Data with Schema Based Validation
  26. Map Data from Source to Target
  27. Map Data from Source to Target in a Pipeline
  28. Map Data from Source to Target in a Pipeline with Validation
  29. Capture Data that Failed Data Mapping Validation
  30. Map Data from Source to Target with Lookup
  31. Map Records using Schema
  32. Map Records in Pipeline using Schema
  33. Declaratively Map Data
  34. Declaratively Map XML Files
  35. Declaratively Map Data Using Positions
  36. Declaratively Map Data with Source and Target Schema
  37. Declaratively Transform Records using Schema
  38. Declaratively Set Default Values for Missing Data
  39. Read from CSV and Write to Excel
  40. Save and Load DataMapping to XML
  41. Save and Load DecisionTable to XML
  42. Save and Load DecisionTree to XML
  43. Load Snapshot of Dataset
  44. Save and Restore Pipeline from JSON
  45. Show Column Statistics
  46. Show the Columns and Tables of a Schema
  47. Show Unique Values in Column
  48. Build DataMappingPipeline Programmatically
  49. Build DataMappingPipeline Declaratively from Json
  50. Build DataMappingPipeline Declaratively from Xml
  51. Use an EventBus
  52. Use an EventBus in a Pipeline
  53. Use SchemaFilter to Validate Records in a Pipeline
  54. Validate a Field
  55. Validate a Value
  56. Validate Record Fields
  57. Validate Records using Fields and Rules
  58. Validate Records using Rules
  59. Transform Records using Schema
  60. Set Default Values for Missing Data