Machine learning, and artificial intelligence more broadly, are two of today’s most in-demand skills. AI and ML conferences provide a place for you to improve your skills, discuss trends, and exchange ideas with other data scientists, developers, and entrepreneurs. Whether you’re new to the world of machine learning, trying to stay up to date, or just looking to network, there’s a conference happening for you. This article lists over 50 conferences taking place around the world for you to consider attending.
We’re building a new tool to help you work faster with Data Pipeline.
This new tool is a web app that lets you interactively transform, filter, and prepare data on-the-fly. It also lets you generate Data Pipeline code based on the actions you perform.
We recently received an email from a Java developer asking how to convert records in a table (like you get in a relational database, CSV, or Excel file) to a composite tree structure. Normally, we’d point to one of Data Pipeline’s XML or JSON data writers, but for good reasons those options didn’t apply here. The developer emailing us needed the hierarchical structures in object form for use in his API calls.
Since we didn’t have a general-purpose table-to-tree mapper, we built one. We looked at several options, but ultimately decided to add a new operator to the GroupByReader. This not only answered the immediate mapping question, but also allowed him to use the new operator with sliding-window aggregation if the need ever arose.
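To make the table-to-tree idea concrete without reproducing Data Pipeline’s actual GroupByReader API, here is a minimal plain-Java sketch of the underlying mapping: flat rows grouped by leading columns into a nested map, with the last column’s values collected as leaves. The column names (`region`, `city`) are purely illustrative.

```java
import java.util.*;

public class TableToTree {

    // Turns flat rows (column name -> value) into a nested tree.
    // The first keys become branch levels; the last key's values become leaves.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> toTree(List<Map<String, String>> rows, String... keys) {
        Map<String, Object> root = new LinkedHashMap<>();
        for (Map<String, String> row : rows) {
            Map<String, Object> node = root;
            // Walk or create one branch per grouping column.
            for (int i = 0; i < keys.length - 1; i++) {
                node = (Map<String, Object>) node.computeIfAbsent(
                        row.get(keys[i]), k -> new LinkedHashMap<String, Object>());
            }
            // Collect the final column's values as a leaf list.
            String leafKey = keys[keys.length - 1];
            ((List<String>) node.computeIfAbsent(leafKey, k -> new ArrayList<String>()))
                    .add(row.get(leafKey));
        }
        return root;
    }
}
```

With rows like (West, Seattle), (West, Portland), (East, Boston) grouped by `region` then `city`, this produces a two-level tree with one branch per region, which is the object form the developer was after.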
The rest of this blog will walk you through the implementation in case you ever need to add your own custom aggregate operator to Data Pipeline.
ETL is a process for performing data extraction, transformation and loading. The process extracts data from a variety of sources and formats, transforms it into a standard structure, and loads it into a database, file, web service, or other system for analysis, visualization, machine learning, etc.
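The three stages can be sketched in a few lines of plain Java. This is only an illustration of the pattern, not any particular tool: the input format (two-column CSV lines), the column names (`name`, `amount`), and the transform rules are all made up, and the “load” step simply collects to a list where a real job would write to a database or file.

```java
import java.util.*;
import java.util.stream.*;

public class MiniEtl {

    // Extract: parse CSV lines into fields.
    // Transform: drop malformed rows and normalize values.
    // Load: collect into memory (stand-in for a DB, file, or web service).
    public static List<Map<String, String>> run(List<String> csvLines) {
        return csvLines.stream()
                .map(line -> line.split(","))                     // extract
                .filter(f -> f.length == 2 && !f[1].isBlank())    // transform: drop bad rows
                .map(f -> Map.of("name", f[0].trim().toUpperCase(),
                                 "amount", f[1].trim()))          // transform: normalize
                .collect(Collectors.toList());                    // load
    }
}
```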
ETL tools come in a wide variety of shapes. Some run on your desktop or on-premise servers, while others run as SaaS in the cloud. Some are code-based, built on standard programming languages that many developers already know. Others are built on a custom DSL (domain-specific language) in an attempt to be more expressive and require less code. Others still are completely graphical, only offering programming interfaces for complex transformations.
What follows is a list of ETL tools for developers already familiar with Java and the JVM (Java Virtual Machine) to clean, validate, filter, and prepare their data for use.
Earlier this year a friend sent me a video showing how he implemented a phone bill calculation challenge using Scala. I took a stab at it using Java + Data Pipeline and below is what I came up with.
How about you? How would you code this using your favourite language or framework?
Have you ever wanted to pull emails into Excel for analysis? Maybe you need to find the top companies contacting you for your sales team. Maybe you need to perform text or sentiment analysis on the contents of your messages. Or maybe you’re creating visualizations to better understand who’s emailing you. This quick guide will show you how to use Data Pipeline to search and read emails from Gmail or G Suite (formerly Google Apps), process them any way you like, and store them in Excel.
I was reading a blog at Java Code Geeks on how to create a Spring Batch ETL Job. What struck me about the example was the amount of code required by the framework for such a routine task. In this blog, you’ll see how to accomplish the same task, summarizing a million stock trades to find the open, close, high, and low prices for each symbol, using our Data Pipeline framework.
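The actual post uses Data Pipeline’s readers and grouping support; as a language-level illustration of just the aggregation, here is a minimal plain-Java sketch. The `Trade` shape is assumed (symbol plus price, with trades arriving in time order), and a single pass tracks open/high/low/close per symbol.

```java
import java.util.*;

public class OhlcSummary {

    // Hypothetical trade shape for illustration: symbol and price, in time order.
    public record Trade(String symbol, double price) {}

    public record Ohlc(double open, double high, double low, double close) {}

    // Single pass over the trades, grouping by symbol:
    // open is the first price seen, close is the last, high/low track extremes.
    public static Map<String, Ohlc> summarize(List<Trade> trades) {
        Map<String, Ohlc> out = new LinkedHashMap<>();
        for (Trade t : trades) {
            out.merge(t.symbol(),
                    new Ohlc(t.price(), t.price(), t.price(), t.price()),
                    (a, b) -> new Ohlc(a.open(),
                                       Math.max(a.high(), b.high()),
                                       Math.min(a.low(), b.low()),
                                       b.close()));
        }
        return out;
    }
}
```

Because it streams through the list once and keeps only one record per symbol, the same shape scales to a million trades without holding them all in memory.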
Being a data scientist means dedication to continuous learning. One great way to keep learning, improve your network, and get exposed to different views is to attend conferences. Here are several conferences for data scientists you should consider attending.
One question I like to ask in interviews is: how would you speed up inserts when using JDBC?
This simple question usually shows me how knowledgeable the developer is with databases in general and JDBC specifically.
If you ever find yourself needing to insert data quickly into a SQL database (and not just being asked about it in an interview), here are some options to consider.
We’re excited to introduce Data Pipeline version 4.1, the second update on our 2016 roadmap.
This release features MongoDB integration, expression language additions, and improved transformations and joins. We’ve also thrown in a ton of examples for all the new 4.1 and 4.0 features. Enjoy.