Who You Are
You have experience deploying robust, efficient web services and data pipelines, and you care equally about ensuring the data they serve is complete and valid. You have a proven record of building systems that handle large volumes of data. Ambiguity and change don't bother you; in fact, you consider yourself highly adaptable when requirements shift.
About the Role
The Data Engineer at FarmLogs will create and maintain APIs to serve the data that powers our application and research. Among other sources, FarmLogs uses satellite imagery, weather data, soil surveys, yield maps, elevation, and fertilizer records. You will take our research concepts and engineer the processes that carry them through to product deployment.
Tasks You’ll Be Juggling
- Creating efficient, clean APIs for accessing third-party data sets (e.g., Planet Labs, SSURGO, PRISM)
- Working in a team to design and deploy datastores and APIs for internal data sets
- Deploying pipelines to run our models at scale (e.g., we model millions of acres of farmland every day using local weather and soil types)
Requirements
- Experience designing web services and APIs
- Demonstrated ability to build quality distributed systems: scaling, parallelizing, profiling, testing
- Experience deploying computationally demanding, data-heavy pipelines
- Experience configuring and automating deployments (e.g., AWS, Jenkins, GitHub, Sentry, Docker, Kubernetes, Terraform)
- 3+ years of experience with Python (we love Flask, NumPy, SciPy, scikit-learn, OpenCV, GDAL)
Nice to Have
- Experience with geospatial data (e.g., GDAL, PostGIS) or imagery (e.g., PIL, OpenCV)
- Agricultural background
To apply for this job please visit farmlogs.com.