# Introduction
This project implements an abstraction of objects that can access a variety of data stores and perform read/write operations through a simple and expressive interface. The abstraction works with both **NoSQL** and **SQL** data stores and leverages **pandas** data-frames.
The supported data store providers:

| Provider | Underlying Drivers | Description |
| :---- | :----: | ----: |
| sqlite | Native SQLite | SQLite3 |
| postgresql | psycopg2 | PostgreSQL |
| redshift | psycopg2 | Amazon Redshift |
| s3 | boto3 | Amazon Simple Storage Service |
| netezza | nzpsql | IBM Netezza |
| Files: CSV, TSV | pandas | pandas data-frame |
| couchdb | cloudant | Couchbase/CouchDB |
| mongodb | pymongo | MongoDB |
| mysql | mysql | MySQL |
| bigquery | google-bigquery | Google BigQuery |
| mariadb | mysql | MariaDB |
| rabbitmq | pika | RabbitMQ Publish/Subscribe |
# Why Use Data-Transport?
**data-transport** is aimed mostly at data scientists who don't want to worry about the underlying data store and would rather manipulate data transparently:

1. Familiarity with **pandas data-frames**
2. Connectivity **drivers** are included
3. Useful for data migrations or ETL
# Usage
## Installation
Within the virtual environment perform the following:

```
pip install git+https://dev.the-phi.com/git/steve/data-transport.git
```
Once installed, **data-transport** can be used as a library in code or through its command line interface (CLI).
## Data Transport as a Library (in code)
---
The data-transport package can be used within code as a library to:
* Read/Write against [mongodb](https://github.com/lnyemba/data-transport/wiki/mongodb)
* Read/Write against traditional [RDBMS](https://github.com/lnyemba/data-transport/wiki/rdbms)
* Read/Write against [bigquery](https://github.com/lnyemba/data-transport/wiki/bigquery)
The read/write functions make data-transport a great candidate for **data science**, **data engineering**, or anything else pertaining to data. It enables operations across multiple data stores (relational or not).
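As an illustration of the common pattern, here is a minimal sketch that reads a collection into a pandas data-frame and writes it to another store. This is a sketch only; the connection values (`host`, `port`, `db`, `doc`, `database`, `table`) are placeholders mirroring the examples further below and should be adjusted to your environment:

```
import transport
from transport import factory

#-- read an entire mongodb collection into a pandas data-frame
#-- (placeholder connection values; assumes no access control)
reader = factory.instance(provider='mongodb',context='read',host='localhost',port='27017',db='example',doc='logs')
df = reader.read()

#-- write the data-frame to a postgresql table (placeholder database/table)
writer = factory.instance(provider='postgresql',context='write',database='mydb',table='logs')
writer.write(df)

reader.close()
writer.close()
```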
## Command Line Interface (CLI)
---
The CLI program is called **transport** and it requires a configuration file:
```
[
    {
        "id":"logs",
        "source":{
            "provider":"postgresql","context":"read","database":"mydb",
            "cmd":{"sql":"SELECT * FROM logs LIMIT 10"}
        },
        "target":{
            "provider":"bigquery","private_key":"/bgqdrive/account/bq-service-account-key.json",
            "dataset":"mydataset"
        }
    }
]
```
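Each entry in the array describes one job: **id** names the job, **source** specifies where and how to read (here a postgresql query), and **target** specifies where to write (here a BigQuery dataset, authenticated with a service-account key file).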
Assuming the above content is stored in a file called **etl-config.json**, we would perform the following in a terminal window:
```
[steve@data-transport]$ transport --config ./etl-config.json [--index <value>]
```
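The optional **--index** argument, presumably, selects which entry of the configuration array to run when the file contains several jobs (the configuration above has a single entry at index 0).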
**Reading from Mongodb**
---
For this example we assume we are tunneling through port 27018 and there is no access control:
```
import transport
from transport import factory

reader = factory.instance(provider='mongodb',context='read',host='localhost',port='27018',db='example',doc='logs')
df = reader.read() #-- reads the entire collection
print (df.head())
#
#-- Applying a mongodb command (here an aggregation pipeline)
PIPELINE = [{"$group":{"_id":None,"count":{"$sum":1}}}]
_command = {"cursor":{},"allowDiskUse":True,"aggregate":"logs","pipeline":PIPELINE}
df = reader.read(mongo=_command)
print (df.head())
reader.close()
```
**Writing to Mongodb**
---
```
import transport
import pandas as pd
from transport import factory

writer = factory.instance(provider='mongodb',context='write',host='localhost',port='27018',db='example',doc='logs')
df = pd.DataFrame({"names":["steve","nico"],"age":[40,30]})
writer.write(df)
writer.close()
```
**Reading from PostgreSQL**
---
```
import transport
from transport import factory

#
# reading from postgresql: read() executes a SELECT against the table
pgreader = factory.instance(provider='postgresql',context='read',database='<database>',table='<table_name>')
df = pgreader.read()                   #-- reads the whole table
df = pgreader.read(sql='<sql query>')  #-- reads using a custom SQL query
pgreader.close()
```
**Reading from Couchdb**
---
```
#
# Reading a document and executing a view
# (assumes dreader is a couchdb reader obtained from factory.instance)
document = dreader.read()
result = dreader.view(id='<design_doc_id>',view_name='<view_name>',<key=value|keys=values>)
```