Accessing data, the DataFrame data structure, reading and writing files

We load the pandas package with the usual import syntax:

In [1]:
import pandas as pd

A text table can be read with the function called read_csv. The resulting table is an object of type DataFrame.

In [2]:
df = pd.read_csv("data/kisnevsor.csv")
In [3]:
type(df)
Out[3]:
pandas.core.frame.DataFrame

If we simply display the table, we get a pleasantly formatted view.

  • The first row contains the names of the columns.
  • The first column contains the row identifiers, the so-called indexes.
  • The rest of the table holds the data itself.
In [4]:
df
Out[4]:
      Név  Eszter Orsi   Nem  Kor  Dátum
0  Bálint       2    .   fiú   20  12:31
1  Csenge       4    4  lány   22  13:20
2  István       5    4   fiú   19  12:35
3    Zita       3    5  lány   20  14:50
4  Károly       4    .   fiú   21  14:55

The simplest way to access a column of the table is by giving its name, as if (in fact, exactly as if) we were accessing an element of a dictionary.

In [5]:
df['Eszter']
Out[5]:
0    2
1    4
2    5
3    3
4    4
Name: Eszter, dtype: int64

A column is returned as a composite object that also contains the index and the column's name. This type is called Series.

In [6]:
type(df['Eszter'])
Out[6]:
pandas.core.series.Series

In most computations we can work with these objects much like with numpy arrays.

In [7]:
df['Eszter']**2
Out[7]:
0     4
1    16
2    25
3     9
4    16
Name: Eszter, dtype: int64

But we can also ask for the underlying numpy array that stores the data.

In [8]:
df['Eszter'].values
Out[8]:
array([2, 4, 5, 3, 4])

Further access is possible through the loc construct, with which we can retrieve both columns and rows by their names.

Column

In [9]:
df.loc[:,'Eszter']
Out[9]:
0    2
1    4
2    5
3    3
4    4
Name: Eszter, dtype: int64

Row. The rows are Series objects, too.

In [10]:
df.loc[1,:]
Out[10]:
Név       Csenge
Eszter         4
Orsi           4
Nem         lány
Kor           22
Dátum      13:20
Name: 1, dtype: object

A single element is accessed as follows.

In [11]:
df.loc[0,'Eszter']
Out[11]:
2
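
Label-based indexing with loc has a purely positional counterpart, iloc; a minimal sketch:

df.iloc[0, 1]      # first row, second column -> 2
df.iloc[0:2, :]    # the first two rows as a DataFrame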

We will come back later to all the things one can do with such a table. For now, let's look at reading files!

Reading

read_csv()

Probably the fastest and most feature-rich table-reading function of the Python language is the read_csv function of the pandas module.

Below we go over a few of its important parameters.

Field-separator character: sep.

Most often we read comma- or tab-separated tables (the separator can be set with the sep keyword if needed) from plain text files.

In [13]:
df=pd.read_csv("data/kisnevsor.csv",sep=',')
df.head()
Out[13]:
      Név  Eszter Orsi   Nem  Kor  Dátum
0  Bálint       2    .   fiú   20  12:31
1  Csenge       4    4  lány   22  13:20
2  István       5    4   fiú   19  12:35
3    Zita       3    5  lány   20  14:50
4  Károly       4    .   fiú   21  14:55

If we set it incorrectly, each row will typically consist of a single value.

In [14]:
df=pd.read_csv("data/kisnevsor.csv",sep=' ')
df
Out[14]:
Név,Eszter,Orsi,Nem,Kor,Dátum
0 Bálint,2,.,fiú,20,12:31
1 Csenge,4,4,lány,22,13:20
2 István,5,4,fiú,19,12:35
3 Zita,3,5,lány,20,14:50
4 Károly,4,.,fiú,21,14:55

Or the function complains that the rows don't all have the same number of columns.

In [15]:
df=pd.read_csv("data/kisnevsor.csv",sep='n')
---------------------------------------------------------------------------
ParserError                               Traceback (most recent call last)
<ipython-input-15-e7a456668c06> in <module>()
----> 1 df=pd.read_csv("data/kisnevsor.csv",sep='n')

...

ParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 3

header

We can specify whether there is a header row and which row it is in.

In [16]:
df=pd.read_csv("data/kisnevsor.csv",header=0)
df
Out[16]:
      Név  Eszter Orsi   Nem  Kor  Dátum
0  Bálint       2    .   fiú   20  12:31
1  Csenge       4    4  lány   22  13:20
2  István       5    4   fiú   19  12:35
3    Zita       3    5  lány   20  14:50
4  Károly       4    .   fiú   21  14:55

If there is no header, the first line of the file becomes the first data row of the table.

In [17]:
df=pd.read_csv("data/kisnevsor.csv",header=None)
df
Out[17]:
        0       1     2     3    4      5
0     Név  Eszter  Orsi   Nem  Kor  Dátum
1  Bálint       2     .   fiú   20  12:31
2  Csenge       4     4  lány   22  13:20
3  István       5     4   fiú   19  12:35
4    Zita       3     5  lány   20  14:50
5  Károly       4     .   fiú   21  14:55
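
When there is no header row, we can also supply our own column names with the names parameter; passing header=0 together with names instead replaces an existing header (see the help text further below). A minimal sketch with made-up English names:

df = pd.read_csv("data/kisnevsor.csv", header=0,
                 names=['name', 'eszter', 'orsi', 'sex', 'age', 'date'])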

If we give a later row number, the table will start from there.

In [18]:
df=pd.read_csv("data/kisnevsor.csv",header=3)
df
Out[18]:
   István  5  4   fiú  19  12:35
0    Zita  3  5  lány  20  14:50
1  Károly  4  .   fiú  21  14:55

index_col (index column)

  • The notebook displays the index column slightly differently.
In [19]:
df=pd.read_csv("data/kisnevsor.csv",index_col='Név')
df.head()
Out[19]:
       Eszter Orsi   Nem  Kor  Dátum
Név
Bálint      2    .   fiú   20  12:31
Csenge      4    4  lány   22  13:20
István      5    4   fiú   19  12:35
Zita        3    5  lány   20  14:50
Károly      4    .   fiú   21  14:55

The index column we have just set behaves much like the numeric index seen before.

In [20]:
df['Eszter']
Out[20]:
Név
Bálint    2
Csenge    4
István    5
Zita      3
Károly    4
Name: Eszter, dtype: int64
In [21]:
df.loc['Bálint',:]
Out[21]:
Eszter        2
Orsi          .
Nem         fiú
Kor          20
Dátum     12:31
Name: Bálint, dtype: object

nrows

Useful when we want to read only the first few rows of a very long file.

In [22]:
df=pd.read_csv("data/kisnevsor.csv",nrows=2)
df
Out[22]:
      Név  Eszter Orsi   Nem  Kor  Dátum
0  Bálint       2    .   fiú   20  12:31
1  Csenge       4    4  lány   22  13:20

na_values

Sometimes missing values are marked by, say, writing a dot in their place. Handling them becomes important when we compute an average, for example, because the NaN values can then be left out of the averaging.

In [23]:
df=pd.read_csv("data/kisnevsor.csv",na_values=['.'])
df
Out[23]:
      Név  Eszter  Orsi   Nem  Kor  Dátum
0  Bálint       2   NaN   fiú   20  12:31
1  Csenge       4   4.0  lány   22  13:20
2  István       5   4.0   fiú   19  12:35
3    Zita       3   5.0  lány   20  14:50
4  Károly       4   NaN   fiú   21  14:55
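
Since the Orsi column is now numeric, with NaN marking the missing entries, mean() skips the NaNs by default; a minimal sketch:

df['Orsi'].mean()    # (4 + 4 + 5) / 3 ≈ 4.33; the two NaN rows are ignored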

parse_dates (handling dates)

By default, the reader does nothing with columns that look like dates or times.

In [24]:
df=pd.read_csv("data/kisnevsor.csv")
df
Out[24]:
      Név  Eszter Orsi   Nem  Kor  Dátum
0  Bálint       2    .   fiú   20  12:31
1  Csenge       4    4  lány   22  13:20
2  István       5    4   fiú   19  12:35
3    Zita       3    5  lány   20  14:50
4  Károly       4    .   fiú   21  14:55
In [25]:
df['Dátum']
Out[25]:
0    12:31
1    13:20
2    12:35
3    14:50
4    14:55
Name: Dátum, dtype: object

But we can ask it to.

In [26]:
df=pd.read_csv("data/kisnevsor.csv",parse_dates=['Dátum'])
df
Out[26]:
      Név  Eszter Orsi   Nem  Kor               Dátum
0  Bálint       2    .   fiú   20 2017-05-17 12:31:00
1  Csenge       4    4  lány   22 2017-05-17 13:20:00
2  István       5    4   fiú   19 2017-05-17 12:35:00
3    Zita       3    5  lány   20 2017-05-17 14:50:00
4  Károly       4    .   fiú   21 2017-05-17 14:55:00
In [27]:
df['Dátum']
Out[27]:
0   2017-05-17 12:31:00
1   2017-05-17 13:20:00
2   2017-05-17 12:35:00
3   2017-05-17 14:50:00
4   2017-05-17 14:55:00
Name: Dátum, dtype: datetime64[ns]
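
For non-standard formats the parser may leave the column as strings; in that case pd.to_datetime can be applied after reading, as the help text further below also suggests. A minimal sketch, assuming the same HH:MM strings:

df = pd.read_csv("data/kisnevsor.csv")
df['Dátum'] = pd.to_datetime(df['Dátum'], format='%H:%M')   # parse the HH:MM strings explicitly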

compression (reading compressed files)

Since compressed data takes up less space, it can be faster to read large data files from compressed archives.

In [28]:
df=pd.read_csv("data/kisnevsor.csv.gz")
df
Out[28]:
      Név  Eszter Orsi   Nem  Kor  Dátum
0  Bálint       2    .   fiú   20  12:31
1  Csenge       4    4  lány   22  13:20
2  István       5    4   fiú   19  12:35
3    Zita       3    5  lány   20  14:50
4  Károly       4    .   fiú   21  14:55

We can also read large files in chunks

Notice that the column names of the chunks are set correctly.

In [29]:
for darab in pd.read_csv("data/kisnevsor.csv.gz",iterator=True,chunksize=2):
    print(darab)
    # here we do something with the chunk
      Név  Eszter Orsi   Nem  Kor  Dátum
0  Bálint       2    .   fiú   20  12:31
1  Csenge       4    4  lány   22  13:20
      Név  Eszter  Orsi   Nem  Kor  Dátum
2  István       5     4   fiú   19  12:35
3    Zita       3     5  lány   20  14:50
      Név  Eszter Orsi  Nem  Kor  Dátum
4  Károly       4    .  fiú   21  14:55
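
In practice we aggregate each chunk instead of printing it, so the whole file never has to fit in memory; a minimal sketch summing the Eszter column:

total = 0
for darab in pd.read_csv("data/kisnevsor.csv.gz", iterator=True, chunksize=2):
    total += darab['Eszter'].sum()    # aggregate each chunk separately
print(total)                          # 2 + 4 + 5 + 3 + 4 = 18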

For large files, reading is faster if we only need a few columns

The usecols parameter accepts either column names or column positions, but not a mixture of the two, as the error below shows.

In [33]:
pd.read_csv("data/kisnevsor.csv.gz",usecols=['Név',1])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-33-c3d51dfcfc7b> in <module>()
----> 1 pd.read_csv("data/kisnevsor.csv.gz",usecols=['Név',1])

...

ValueError: 'usecols' must either be all strings, all unicode, all integers or a callable
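
The working variants keep the selector homogeneous:

pd.read_csv("data/kisnevsor.csv.gz", usecols=['Név', 'Eszter'])   # all names
pd.read_csv("data/kisnevsor.csv.gz", usecols=[0, 1])              # or all positions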

Further options are listed here:

In [34]:
help(pd.read_csv)
Help on function read_csv in module pandas.io.parsers:

read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False, as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, memory_map=False, float_precision=None)
    Read CSV (comma-separated) file into DataFrame
    
    Also supports optionally iterating or breaking of the file
    into chunks.
    
    Additional help can be found in the `online docs for IO Tools
    <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.
    
    Parameters
    ----------
    filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
        The string could be a URL. Valid URL schemes include http, ftp, s3, and
        file. For file URLs, a host is expected. For instance, a local file could
        be file ://localhost/path/to/table.csv
    sep : str, default ','
        Delimiter to use. If sep is None, the C engine cannot automatically detect
        the separator, but the Python parsing engine can, meaning the latter will
        be used automatically. In addition, separators longer than 1 character and
        different from ``'\s+'`` will be interpreted as regular expressions and
        will also force the use of the Python parsing engine. Note that regex
        delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``
    delimiter : str, default ``None``
        Alternative argument name for sep.
    delim_whitespace : boolean, default False
        Specifies whether or not whitespace (e.g. ``' '`` or ``'    '``) will be
        used as the sep. Equivalent to setting ``sep='\s+'``. If this option
        is set to True, nothing should be passed in for the ``delimiter``
        parameter.
    
        .. versionadded:: 0.18.1 support for the Python parser.
    
    header : int or list of ints, default 'infer'
        Row number(s) to use as the column names, and the start of the data.
        Default behavior is as if set to 0 if no ``names`` passed, otherwise
        ``None``. Explicitly pass ``header=0`` to be able to replace existing
        names. The header can be a list of integers that specify row locations for
        a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not
        specified will be skipped (e.g. 2 in this example is skipped). Note that
        this parameter ignores commented lines and empty lines if
        ``skip_blank_lines=True``, so header=0 denotes the first line of data
        rather than the first line of the file.
    names : array-like, default None
        List of column names to use. If file contains no header row, then you
        should explicitly pass header=None. Duplicates in this list are not
        allowed unless mangle_dupe_cols=True, which is the default.
    index_col : int or sequence or False, default None
        Column to use as the row labels of the DataFrame. If a sequence is given, a
        MultiIndex is used. If you have a malformed file with delimiters at the end
        of each line, you might consider index_col=False to force pandas to _not_
        use the first column as the index (row names)
    usecols : array-like or callable, default None
        Return a subset of the columns. If array-like, all elements must either
        be positional (i.e. integer indices into the document columns) or strings
        that correspond to column names provided either by the user in `names` or
        inferred from the document header row(s). For example, a valid array-like
        `usecols` parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
    
        If callable, the callable function will be evaluated against the column
        names, returning names where the callable function evaluates to True. An
        example of a valid callable argument would be ``lambda x: x.upper() in
        ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
        parsing time and lower memory usage.
    as_recarray : boolean, default False
        DEPRECATED: this argument will be removed in a future version. Please call
        `pd.read_csv(...).to_records()` instead.
    
        Return a NumPy recarray instead of a DataFrame after parsing the data.
        If set to True, this option takes precedence over the `squeeze` parameter.
        In addition, as row indices are not available in such a format, the
        `index_col` parameter will be ignored.
    squeeze : boolean, default False
        If the parsed data only contains one column then return a Series
    prefix : str, default None
        Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...
    mangle_dupe_cols : boolean, default True
        Duplicate columns will be specified as 'X.0'...'X.N', rather than
        'X'...'X'. Passing in False will cause data to be overwritten if there
        are duplicate names in the columns.
    dtype : Type name or dict of column -> type, default None
        Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}
        Use `str` or `object` to preserve and not interpret dtype.
        If converters are specified, they will be applied INSTEAD
        of dtype conversion.
    engine : {'c', 'python'}, optional
        Parser engine to use. The C engine is faster while the python engine is
        currently more feature-complete.
    converters : dict, default None
        Dict of functions for converting values in certain columns. Keys can either
        be integers or column labels
    true_values : list, default None
        Values to consider as True
    false_values : list, default None
        Values to consider as False
    skipinitialspace : boolean, default False
        Skip spaces after delimiter.
    skiprows : list-like or integer or callable, default None
        Line numbers to skip (0-indexed) or number of lines to skip (int)
        at the start of the file.
    
        If callable, the callable function will be evaluated against the row
        indices, returning True if the row should be skipped and False otherwise.
        An example of a valid callable argument would be ``lambda x: x in [0, 2]``.
    skipfooter : int, default 0
        Number of lines at bottom of file to skip (Unsupported with engine='c')
    skip_footer : int, default 0
        DEPRECATED: use the `skipfooter` parameter instead, as they are identical
    nrows : int, default None
        Number of rows of file to read. Useful for reading pieces of large files
    na_values : scalar, str, list-like, or dict, default None
        Additional strings to recognize as NA/NaN. If dict passed, specific
        per-column NA values.  By default the following values are interpreted as
        NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
        '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'nan'`.
    keep_default_na : bool, default True
        If na_values are specified and keep_default_na is False the default NaN
        values are overridden, otherwise they're appended to.
    na_filter : boolean, default True
        Detect missing value markers (empty strings and the value of na_values). In
        data without any NAs, passing na_filter=False can improve the performance
        of reading a large file
    verbose : boolean, default False
        Indicate number of NA values placed in non-numeric columns
    skip_blank_lines : boolean, default True
        If True, skip over blank lines rather than interpreting as NaN values
    parse_dates : boolean or list of ints or names or list of lists or dict, default False
    
        * boolean. If True -> try parsing the index.
        * list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
          each as a separate date column.
        * list of lists. e.g.  If [[1, 3]] -> combine columns 1 and 3 and parse as
          a single date column.
        * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result
          'foo'
    
        If a column or index contains an unparseable date, the entire column or
        index will be returned unaltered as an object data type. For non-standard
        datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
    
        Note: A fast-path exists for iso8601-formatted dates.
    infer_datetime_format : boolean, default False
        If True and parse_dates is enabled, pandas will attempt to infer the format
        of the datetime strings in the columns, and if it can be inferred, switch
        to a faster method of parsing them. In some cases this can increase the
        parsing speed by 5-10x.
    keep_date_col : boolean, default False
        If True and parse_dates specifies combining multiple columns then
        keep the original columns.
    date_parser : function, default None
        Function to use for converting a sequence of string columns to an array of
        datetime instances. The default uses ``dateutil.parser.parser`` to do the
        conversion. Pandas will try to call date_parser in three different ways,
        advancing to the next if an exception occurs: 1) Pass one or more arrays
        (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
        string values from the columns defined by parse_dates into a single array
        and pass that; and 3) call date_parser once for each row using one or more
        strings (corresponding to the columns defined by parse_dates) as arguments.
    dayfirst : boolean, default False
        DD/MM format dates, international and European format
    iterator : boolean, default False
        Return TextFileReader object for iteration or getting chunks with
        ``get_chunk()``.
    chunksize : int, default None
        Return TextFileReader object for iteration.
        See the `IO Tools docs
        <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
        for more information on ``iterator`` and ``chunksize``.
    compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
        For on-the-fly decompression of on-disk data. If 'infer', then use gzip,
        bz2, zip or xz if filepath_or_buffer is a string ending in '.gz', '.bz2',
        '.zip', or 'xz', respectively, and no decompression otherwise. If using
        'zip', the ZIP file must contain only one data file to be read in.
        Set to None for no decompression.
    
        .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
    
    thousands : str, default None
        Thousands separator
    decimal : str, default '.'
        Character to recognize as decimal point (e.g. use ',' for European data).
    float_precision : string, default None
        Specifies which converter the C engine should use for floating-point
        values. The options are `None` for the ordinary converter,
        `high` for the high-precision converter, and `round_trip` for the
        round-trip converter.
    lineterminator : str (length 1), default None
        Character to break file into lines. Only valid with C parser.
    quotechar : str (length 1), optional
        The character used to denote the start and end of a quoted item. Quoted
        items can include the delimiter and it will be ignored.
    quoting : int or csv.QUOTE_* instance, default 0
        Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
        QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
    doublequote : boolean, default ``True``
       When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
       whether or not to interpret two consecutive quotechar elements INSIDE a
       field as a single ``quotechar`` element.
    escapechar : str (length 1), default None
        One-character string used to escape delimiter when quoting is QUOTE_NONE.
    comment : str, default None
        Indicates remainder of line should not be parsed. If found at the beginning
        of a line, the line will be ignored altogether. This parameter must be a
        single character. Like empty lines (as long as ``skip_blank_lines=True``),
        fully commented lines are ignored by the parameter `header` but not by
        `skiprows`. For example, if comment='#', parsing '#empty\na,b,c\n1,2,3'
        with `header=0` will result in 'a,b,c' being
        treated as the header.
    encoding : str, default None
        Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python
        standard encodings
        <https://docs.python.org/3/library/codecs.html#standard-encodings>`_
    dialect : str or csv.Dialect instance, default None
        If provided, this parameter will override values (default or not) for the
        following parameters: `delimiter`, `doublequote`, `escapechar`,
        `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
        override values, a ParserWarning will be issued. See csv.Dialect
        documentation for more details.
    tupleize_cols : boolean, default False
        Leave a list of tuples on columns as is (default is to convert to
        a Multi Index on the columns)
    error_bad_lines : boolean, default True
        Lines with too many fields (e.g. a csv line with too many commas) will by
        default cause an exception to be raised, and no DataFrame will be returned.
        If False, then these "bad lines" will dropped from the DataFrame that is
        returned.
    warn_bad_lines : boolean, default True
        If error_bad_lines is False, and warn_bad_lines is True, a warning for each
        "bad line" will be output.
    low_memory : boolean, default True
        Internally process the file in chunks, resulting in lower memory use
        while parsing, but possibly mixed type inference.  To ensure no mixed
        types either set False, or specify the type with the `dtype` parameter.
        Note that the entire file is read into a single DataFrame regardless,
        use the `chunksize` or `iterator` parameter to return the data in chunks.
        (Only valid with C parser)
    buffer_lines : int, default None
        DEPRECATED: this argument will be removed in a future version because its
        value is not respected by the parser
    compact_ints : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If compact_ints is True, then for any column that is of integer dtype,
        the parser will attempt to cast it as the smallest integer dtype possible,
        either signed or unsigned depending on the specification from the
        `use_unsigned` parameter.
    use_unsigned : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If integer columns are being compacted (i.e. `compact_ints=True`), specify
        whether the column should be compacted to the smallest signed or unsigned
        integer dtype.
    memory_map : boolean, default False
        If a filepath is provided for `filepath_or_buffer`, map the file object
        directly onto memory and access the data directly from there. Using this
        option can improve performance because there is no longer any I/O overhead.
    
    Returns
    -------
    result : DataFrame or TextParser

Writing to a csv file

By default, pandas writes out the indexes as well.

In [35]:
df.to_csv('tmp.csv')
In [36]:
%cat tmp.csv
,Név,Eszter,Orsi,Nem,Kor,Dátum
0,Bálint,2,.,fiú,20,12:31
1,Csenge,4,4,lány,22,13:20
2,István,5,4,fiú,19,12:35
3,Zita,3,5,lány,20,14:50
4,Károly,4,.,fiú,21,14:55

If we don't want that:

In [37]:
df.to_csv('tmp.csv',index=False)
In [38]:
%cat tmp.csv
Név,Eszter,Orsi,Nem,Kor,Dátum
Bálint,2,.,fiú,20,12:31
Csenge,4,4,lány,22,13:20
István,5,4,fiú,19,12:35
Zita,3,5,lány,20,14:50
Károly,4,.,fiú,21,14:55

The separator can be set here too, with the sep argument.

In [39]:
df.to_csv('tmp.tsv',sep='\t')
In [40]:
%cat tmp.tsv
	Név	Eszter	Orsi	Nem	Kor	Dátum
0	Bálint	2	.	fiú	20	12:31
1	Csenge	4	4	lány	22	13:20
2	István	5	4	fiú	19	12:35
3	Zita	3	5	lány	20	14:50
4	Károly	4	.	fiú	21	14:55

We can write compressed files, too.

In [41]:
df.to_csv('tmp.csv.gz', compression='gzip')

We can also set the format of floating-point numbers. (This table has no float columns, so the output below is unchanged.)

In [42]:
df.to_csv('tmp.csv',float_format='%.2f')
In [43]:
%cat tmp.csv
,Név,Eszter,Orsi,Nem,Kor,Dátum
0,Bálint,2,.,fiú,20,12:31
1,Csenge,4,4,lány,22,13:20
2,István,5,4,fiú,19,12:35
3,Zita,3,5,lány,20,14:50
4,Károly,4,.,fiú,21,14:55
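
To actually see the effect of float_format, here is a minimal sketch with a hypothetical float column (df2 and tmp_float.csv are made up for illustration):

df2 = pd.DataFrame({'x': [1.23456, 2.5]})
df2.to_csv('tmp_float.csv', float_format='%.2f', index=False)
# tmp_float.csv now contains:
# x
# 1.23
# 2.50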

Further reading and writing functions:

We can read Excel files, too.

In [45]:
df=pd.read_excel('data/kisnevsor.xlsx')
df
Out[45]:
        Eszter  Orsi   Nem  Kor
Bálint       2     3   fiú   20
Csenge       4     4  lány   22
István       5     4   fiú   19
Zita         3     5  lány   20

And write them.

In [48]:
df.to_excel('tmp.xlsx')

Reading dictionaries

Various web services often return the requested data as a Python-dictionary-like string, in the so-called JSON format. We can convert these into dictionaries with the functions of the json library.

In [49]:
import json

Every line of the data/json_example file is one such dictionary, returned by the Google Geocoding API. Let's load these lines into a list of dictionaries with the following command:

In [50]:
d=[json.loads(s) for s in open("data/json_example").readlines()]
d[0:2]
Out[50]:
[{'id': '3040051',
  'query': 'les+Escaldes+AD',
  'results': [{'address_components': [{'long_name': 'Les Escaldes',
      'short_name': 'Les Escaldes',
      'types': ['locality', 'political']},
     {'long_name': 'Escaldes-Engordany',
      'short_name': 'Escaldes-Engordany',
      'types': ['administrative_area_level_1', 'political']},
     {'long_name': 'Andorra',
      'short_name': 'AD',
      'types': ['country', 'political']},
     {'long_name': 'AD700', 'short_name': 'AD700', 'types': ['postal_code']}],
    'formatted_address': 'AD700 Les Escaldes, Andorra',
    'geometry': {'bounds': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}},
     'location': {'lat': 42.5100804, 'lng': 1.5387862},
     'location_type': 'APPROXIMATE',
     'viewport': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}}},
    'place_id': 'ChIJaxpK9OKKpRIRtp4e8lTF3v0',
    'types': ['locality', 'political']}],
  'status': 'OK'},
 {'id': '3040051',
  'query': 'les+Escaldes+AD',
  'results': [{'address_components': [{'long_name': 'Les Escaldes',
      'short_name': 'Les Escaldes',
      'types': ['locality', 'political']},
     {'long_name': 'Escaldes-Engordany',
      'short_name': 'Escaldes-Engordany',
      'types': ['administrative_area_level_1', 'political']},
     {'long_name': 'Andorra',
      'short_name': 'AD',
      'types': ['country', 'political']},
     {'long_name': 'AD700', 'short_name': 'AD700', 'types': ['postal_code']}],
    'formatted_address': 'AD700 Les Escaldes, Andorra',
    'geometry': {'bounds': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}},
     'location': {'lat': 42.5100804, 'lng': 1.5387862},
     'location_type': 'APPROXIMATE',
     'viewport': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}}},
    'place_id': 'ChIJaxpK9OKKpRIRtp4e8lTF3v0',
    'types': ['locality', 'political']}],
  'status': 'OK'}]

We can see that the same keys repeat in every list element: so it would be better to turn these lines into a table!

In [51]:
pd.DataFrame.from_dict(d)
Out[51]:
id query results status
0 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
1 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
2 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
3 3041563 Andorra+la+Vella+AD [{'types': ['locality', 'political'], 'address... OK
4 290594 Umm+al+Qaywayn+AE [{'types': ['administrative_area_level_1', 'po... OK
5 291074 Ras+al-Khaimah+AE [{'types': ['locality', 'political'], 'address... OK
6 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
7 3041563 Andorra+la+Vella+AD [{'types': ['locality', 'political'], 'address... OK
8 290594 Umm+al+Qaywayn+AE [{'types': ['administrative_area_level_1', 'po... OK
9 291074 Ras+al-Khaimah+AE [{'types': ['locality', 'political'], 'address... OK
10 291696 Khawr+Fakkan+AE [{'types': ['locality', 'political'], 'address... OK
11 292223 Dubai+AE [{'types': ['locality', 'political'], 'address... OK
12 292231 Dibba+Al-Fujairah+AE [{'types': ['locality', 'political'], 'address... OK
13 292239 Dibba+Al-Hisn+AE [{'types': ['locality', 'political'], 'address... OK
14 292672 Sharjah+AE [{'types': ['locality', 'political'], 'address... OK
15 292688 Ar+Ruways+AE [{'types': ['locality', 'political'], 'address... OK
16 292878 Al+Fujayrah+AE [{'types': ['administrative_area_level_1', 'po... OK
17 292913 Al+Ain+AE [{'types': ['locality', 'political'], 'address... OK
18 292932 Ajman+AE [{'types': ['locality', 'political'], 'address... OK
19 292953 Adh+Dhayd+AE [{'types': ['locality', 'political'], 'address... OK
20 292968 Abu+Dhabi+AE [{'types': ['locality', 'political'], 'address... OK
21 1120985 Zaranj+AF [{'types': ['locality', 'political'], 'address... OK
22 1123004 Taloqan+AF [{'types': ['locality', 'political'], 'address... OK
23 1125155 Shindand+AF [{'types': ['airport', 'establishment', 'point... OK
24 1125444 Shibirghan+AF [{'types': ['locality', 'political'], 'address... OK
25 1125896 Shahrak+AF [{'types': ['administrative_area_level_2', 'po... OK
26 1127110 Sar-e+Pul+AF [{'types': ['administrative_area_level_1', 'po... OK
27 1127628 Sang-e+Charak+AF [{'types': ['administrative_area_level_2', 'po... OK
28 1127768 Aibak+AF [{'formatted_address': 'Aybak, Afghanistan', '... OK
29 1128265 Rustaq+AF [{'types': ['locality', 'political'], 'address... OK
30 1129516 Qarqin+AF [{'types': ['locality', 'political'], 'address... OK
31 1129648 Qarawul+AF [{'formatted_address': 'Hazart Imam, Afghanist... OK
32 1130490 Pul-e+Khumri+AF [{'types': ['locality', 'political'], 'address... OK
33 1131316 Paghman+AF [{'formatted_address': 'Paghman, Afghanistan',... OK
34 1132495 Nahrin+AF [{'formatted_address': 'Nahrain, Afghanistan',... OK
35 1133453 Maymana+AF [{'types': ['locality', 'political'], 'address... OK
36 1133574 Mehtar+Lam+AF [{'types': ['locality', 'political'], 'address... OK
37 1133616 Mazar-e+Sharif+AF [{'types': ['locality', 'political'], 'address... OK
38 1134720 Lashkar+Gah+AF [{'types': ['locality', 'political'], 'address... OK
39 1135158 Kushk+AF [{'formatted_address': 'Kūšk, Afghanistan', 'p... OK
40 1135689 Kunduz+AF [{'types': ['locality', 'political'], 'address... OK
41 1136469 Khost+AF [{'types': ['locality', 'political'], 'address... OK
42 1136575 Khulm+AF [{'types': ['locality', 'political'], 'address... OK
43 1136863 Khash+AF [{'formatted_address': 'Khash, Afghanistan', '... OK
44 1137168 Khanabad+AF [{'types': ['locality', 'political'], 'address... OK
45 1137807 Karukh+AF [{'formatted_address': 'Karokh, Afghanistan', ... OK
46 1138336 Kandahar+AF [{'types': ['locality', 'political'], 'address... OK
47 1138958 Kabul+AF [{'types': ['locality', 'political'], 'address... OK
48 1139715 Jalalabad+AF [{'types': ['locality', 'political'], 'address... OK
49 1139807 Jabal+os+Saraj+AF [{'types': ['locality', 'political'], 'address... OK
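
The results column still holds nested dictionaries; a minimal sketch of extracting, say, the coordinates with a comprehension (assuming every record's status is OK, as above):

coords = [(r['results'][0]['geometry']['location']['lat'],
           r['results'][0]['geometry']['location']['lng'])
          for r in d]
coords[0]    # (42.5100804, 1.5387862)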