Introduction

This project implements an abstraction layer over a variety of data stores, exposing read/write operations through a simple and expressive interface. The abstraction works with both NoSQL and SQL data stores and leverages pandas data-frames.

The supported data store providers:

Provider           Underlying driver    Description
sqlite             Native SQLite        SQLite3
postgresql         psycopg2             PostgreSQL
redshift           psycopg2             Amazon Redshift
netezza            nzpsql               IBM Netezza
Files: CSV, TSV    pandas               pandas data-frame
couchdb            cloudant             Couchbase/CouchDB
mongodb            pymongo              MongoDB
mysql              mysql                MySQL
bigquery           google-bigquery      Google BigQuery
mariadb            mysql                MariaDB
rabbitmq           pika                 RabbitMQ Publish/Subscribe

Why Use Data-Transport?

Data-Transport is aimed mostly at data scientists who don't really care about the underlying data store and simply want to manipulate data transparently:

  1. Familiarity with pandas data-frames
  2. Connectivity drivers are included
  3. Useful for ETL (see the sketch below this list)
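
As a rough illustration of the ETL use case, the sketch below reads from one data store and writes the resulting data-frame to another. The read calls mirror the Usage section below; the context='write' argument and the write() call are assumptions patterned after the read interface and may differ in your version of the library.

import transport
from transport import factory
#
# read everything from a mongodb collection (parameters mirror the Usage section below)
mreader = factory.instance(provider='mongodb',doc='<doc_id>',db='<db-name>')
df      = mreader.read()    #-- pandas data-frame
#
# ... transform df with pandas as needed ...
#
# write the data-frame to postgresql
# NOTE: context='write' and write(df) are assumptions, not confirmed by this README
pgwriter = factory.instance(provider='postgresql',database='<database>',table='<table_name>',context='write')
pgwriter.write(df)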

Installation

Within your virtual environment, run the following command:

pip install git+https://dev.the-phi.com/git/steve/data-transport.git

Binaries and eggs will be provided later on

Usage

In your code, start with the following imports:

import transport
from transport import factory
#
# creating a mongodb reader; username and password are optional
args    = {"host":"<host>:<port>","dbname":"<database>","doc":"<doc_id>","username":"<username>","password":"<password>"}
mreader = factory.instance(provider='mongodb',doc='<doc_id>',db='<db-name>')
#
# reading a document i.e. just applying a find (no filters)
#
df    = mreader.read()  #-- pandas data-frame
df.head()

#
# reading from postgresql
#
pgreader = factory.instance(provider='postgresql',database='<database>',table='<table_name>')
df = pgreader.read()                   #-- reads the whole table by executing a SELECT
df = pgreader.read(sql='<sql query>')  #-- executes the given SQL query

#
# reading a couchdb document and executing a view
# (dreader below is a couchdb reader, obtained from factory.instance with provider='couchdb')
#
document = dreader.read()
result   = dreader.view(id='<design_doc_id>',view_name='<view_name>',<key=value|keys=values>)
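
For completeness, here is a hedged sketch of writing a pandas data-frame back to a store. The write() method and the context='write' flag are assumptions patterned after the read examples above; consult the API of your installed version before relying on them.

import transport
from transport import factory
import pandas as pd
#
# a small data-frame to persist
df = pd.DataFrame({"name":["cough","fever"],"count":[12,3]})
#
# writing to mongodb
# NOTE: context='write' and write(df) are assumptions, not confirmed by this README
mwriter = factory.instance(provider='mongodb',doc='<doc_id>',db='<db-name>',context='write')
mwriter.write(df)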