data-transport: a Python data transport layer for MongoDB, Netezza, BigQuery, PostgreSQL, and more
Steve Nyemba 1eda49b63a documentation 2024-04-17 23:56:31 -05:00

README.md

Introduction

This project implements an abstraction layer over a variety of data stores, providing read/write access through a simple, expressive interface. The abstraction works with NoSQL, SQL, and cloud data stores and leverages pandas.
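To make the idea concrete, here is a minimal sketch of what such a uniform read/write abstraction looks like: two backends (SQLite and CSV) exposed behind identical `read`/`write` calls. All class and method names below are illustrative, not the library's actual API; see the project notebooks for the real interface.

```python
# Illustrative sketch of a uniform read/write abstraction over different
# stores. Names here are hypothetical, not data-transport's real API.
import csv
import io
import sqlite3

class SQLiteStore:
    """Read/write rows (lists of dicts) against a SQLite table."""
    def __init__(self, path, table):
        self.conn = sqlite3.connect(path)
        self.table = table
    def write(self, rows):
        cols = list(rows[0])
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {self.table} ({','.join(cols)})")
        self.conn.executemany(
            f"INSERT INTO {self.table} VALUES ({','.join('?' for _ in cols)})",
            [tuple(r[c] for c in cols) for r in rows])
        self.conn.commit()
    def read(self):
        cur = self.conn.execute(f"SELECT * FROM {self.table}")
        cols = [d[0] for d in cur.description]
        return [dict(zip(cols, row)) for row in cur]

class CSVStore:
    """Same interface, backed by an in-memory CSV buffer."""
    def __init__(self):
        self.buf = io.StringIO()
    def write(self, rows):
        writer = csv.DictWriter(self.buf, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
    def read(self):
        self.buf.seek(0)
        return list(csv.DictReader(self.buf))

rows = [{"name": "ada", "age": "36"}, {"name": "alan", "age": "41"}]
for store in (SQLiteStore(":memory:", "people"), CSVStore()):
    store.write(rows)
    print(store.read()[0]["name"])  # same calls, different backends
```

The point is that application code only ever sees `read()` and `write(rows)`; which store sits behind those calls is a configuration detail.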

Why Use Data-Transport?

Data scientists who don't want to worry about the underlying database and prefer a simple, consistent way to read and write data will be well served. Additionally, we implement a lightweight Extract-Transform-Load (ETL) API and command-line (CLI) tool.

  1. Familiarity with pandas data-frames
  2. Connectivity drivers are included
  3. Mining data from various sources
  4. Useful for data migrations or ETL
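The ETL use case in the list above can be sketched in a few lines. This is a hypothetical, stdlib-only example (the inline CSV source and table names are invented for illustration); the data-transport CLI drives this kind of extract/transform/load flow from configuration instead.

```python
# Minimal ETL sketch: extract rows from CSV text, transform them,
# load into SQLite. Source data and names are invented for illustration.
import csv
import io
import sqlite3

SOURCE = "name,visits\nada,3\nalan,5\n"

# Extract: parse the CSV source into dict rows.
rows = list(csv.DictReader(io.StringIO(SOURCE)))
# Transform: cast visits to int and keep frequent visitors only.
rows = [{"name": r["name"], "visits": int(r["visits"])} for r in rows]
rows = [r for r in rows if r["visits"] >= 4]
# Load: write the result into a SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (name TEXT, visits INTEGER)")
conn.executemany("INSERT INTO visits VALUES (?, ?)",
                 [(r["name"], r["visits"]) for r in rows])
print(conn.execute("SELECT name, visits FROM visits").fetchall())
```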

Installation

Within a virtual environment, run the following:

pip install git+https://github.com/lnyemba/data-transport.git

Learn More

Notebooks with sample code for reading and writing against MongoDB, CouchDB, Netezza, PostgreSQL, Google BigQuery, Databricks, Microsoft SQL Server, MySQL, and more are available. Visit the data-transport homepage.