Python Pandas: pandas.read_csv() function overview, concrete examples, and detailed usage guide

Overview of read_csv()

The read_csv function reads not only csv files but also plain-text files directly (by default it expects comma-separated content), e.g. pd.read_csv('data.csv').

pandas.read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, dialect=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)

Official documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html?highlight=read_csv#pandas.read_csv

Reads a comma-separated values (csv) file into a DataFrame. Also supports optionally iterating or breaking the file into chunks. Additional help can be found in the online docs for IO Tools.

Parameters

filepath_or_buffer: str, path object or file-like object. Any valid string path is acceptable; the string can be a URL. Valid URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is expected; a local file could be file://localhost/path/to/table.csv. For path objects, pandas accepts any os.PathLike. A file-like object means any object with a read() method, such as a file handle (e.g. from the builtin open function) or StringIO.

sep: str, default ','. Delimiter to use. If sep is None, the C engine cannot detect the separator automatically, but the Python parsing engine can, meaning the latter will be used and will detect the separator with Python's builtin sniffer tool, csv.Sniffer. Separators longer than one character and different from '\s+' are interpreted as regular expressions and also force the Python engine; note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'. Tip: sep='\t' reads tab-separated files.

delimiter: str, default None. Alias for sep.

header: int or list of int, default 'infer'. Row number(s) to use as the column names and the start of the data. The default infers the column names: if no names are passed, the behavior is identical to header=0 and column names are inferred from the first line of the file; if names are passed explicitly, the behavior is identical to header=None. Pass header=0 explicitly to replace existing names. The header can be a list of integers specifying row locations for a MultiIndex on the columns, e.g. [0,1,3]; intervening rows not listed (2 in this example) are skipped. Note that this parameter ignores commented and blank lines when skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file. Tip: header=None stops the first line of a csv/txt file from being treated as column names.

names: array-like, optional. List of column names to use. If the file contains a header row, you should explicitly pass header=0 to override the column names. Duplicates in this list are not allowed.

index_col: int, str, sequence of int / str, or False, default None. Column(s) to use as the row labels of the DataFrame, given as string names or column indices. A sequence of int / str yields a MultiIndex. Note: index_col=False forces pandas not to use the first column as the index, e.g. for a malformed file with a delimiter at the end of each line.

usecols: list-like or callable, optional. Return a subset of the columns. If list-like, all elements must be either positional (integer indices into the document columns) or strings matching the column names provided in names or inferred from the header row(s); a valid list-like usecols would be [0, 1, 2] or ['foo', 'bar', 'baz']. Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a DataFrame with element order preserved, reindex afterwards: pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for ['foo', 'bar'] order, or pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order. If callable, the function is evaluated against the column names, keeping the names for which it returns True; a valid callable argument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD']. Using this parameter results in much faster parsing and lower memory usage.

squeeze: bool, default False. If the parsed data contains only one column, return a Series.

prefix: str, optional. Prefix to add to column numbers when there is no header, e.g. 'X' for X0, X1, …

mangle_dupe_cols: bool, default True. Duplicate columns are renamed 'X', 'X.1', … 'X.N' rather than 'X'…'X'. Passing False will cause data to be overwritten if there are duplicate names in the columns.

dtype: type name or dict of column -> type, optional. Data type for the data or for specific columns, which can be fixed at read time, e.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}. Use str or object together with suitable na_values settings to preserve values without interpreting dtype. If converters are specified, they are applied INSTEAD of dtype conversion.

engine: {'c', 'python'}, optional. Parser engine to use. The C engine is faster, while the Python engine is currently more feature-complete.

converters: dict, optional. Dict of functions for converting values in certain columns. Keys can be integers or column labels.

true_values: list, optional. Values to consider as True.

false_values: list, optional. Values to consider as False.

skipinitialspace: bool, default False. Skip spaces after the delimiter.

skiprows: list-like, int or callable, optional. Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file. If callable, the function is evaluated against the row indices, returning
True if the row should be skipped and False otherwise; a valid callable argument would be lambda x: x in [0, 2].

skipfooter: int, default 0. Number of lines at the bottom of the file to skip (unsupported with engine='c').

nrows: int, optional. Number of rows of the file to read. Useful for reading pieces of large files; e.g. nrows=2000 reads only the first 2000 data rows when testing on a sample.

na_values: scalar, str, list-like, or dict, optional. Additional strings to recognize as NA/NaN. If a dict is passed, it specifies per-column NA values. By default the following values are interpreted as NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', '<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan', 'null'. Tip: na_values=['NA'] specifies the markers explicitly.

keep_default_na: bool, default True. Whether to include the default NaN values when parsing. Depending on whether na_values is passed, the behavior is: if keep_default_na is True and na_values is specified, na_values is appended to the default NaN values; if True and na_values is not specified, only the defaults are used; if False and na_values is specified, only the specified na_values are used; if False and na_values is not specified, no strings are parsed as NaN. Note that if na_filter is passed as False, the keep_default_na and na_values parameters are ignored.

na_filter: bool, default True. Detect missing-value markers (empty strings and the values in na_values). For data without any NAs, passing na_filter=False can improve the performance of reading a large file.

verbose: bool, default False. Indicate the number of NA values placed in non-numeric columns.

skip_blank_lines: bool, default True. If True, skip blank lines rather than interpreting them as NaN values.

parse_dates: bool, list of int or names, list of lists, or dict, default False. E.g. parse_dates=['joined'] converts that field to dates. The behavior is: bool, If True -> try parsing the index; list of int or names, e.g. [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column; list of lists, e.g. [[1, 3]] -> combine columns 1 and 3 and parse as a single date column; dict, e.g. {'foo': [1, 3]} -> parse columns 1 and 3 as a date and call the result 'foo'. If a column or index cannot be represented as an array of datetimes, say because of an unparseable value or a mixture of timezones, it is returned unaltered as object dtype. For non-standard datetime parsing, use pd.to_datetime after pd.read_csv. To parse an index or column with a mixture of timezones, specify date_parser as a partially-applied pandas.to_datetime() with utc=True; see "Parsing a CSV with mixed timezones" for more. Note: a fast path exists for iso8601-formatted dates.

infer_datetime_format: bool, default False. If True and parse_dates is enabled, pandas attempts to infer the format of the datetime strings in the columns and, if it can be inferred, switches to a faster parsing method. In some cases this can increase parsing speed by 5-10x.

keep_date_col: bool, default False. If True and parse_dates specifies combining multiple columns, keep the original columns.

date_parser: function, optional. Function for converting a sequence of string columns to an array of datetime instances; the default uses dateutil.parser.parser. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; 3) call date_parser once for each row, using one or more strings (corresponding to the parse_dates columns) as arguments.

dayfirst: bool, default False. DD/MM format dates, international and European format.

cache_dates: bool, default True. If True, use a cache of unique converted dates to apply the datetime conversion. Can produce a significant speed-up when parsing duplicate date strings, especially ones with timezone offsets. New in version 0.25.0.

iterator: bool, default False. Return a TextFileReader object for iteration or for getting chunks with get_chunk().

chunksize: int, optional. Return a TextFileReader object for iteration. See the IO Tools docs for more information on iterator and chunksize.

compression: {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'. For on-the-fly decompression of on-disk data. If 'infer' and filepath_or_buffer is path-like, compression is detected from the extensions '.gz', '.bz2', '.zip', or '.xz' (otherwise no decompression). If using 'zip', the ZIP file must contain only one data file to be read in. Set to None for no decompression.

thousands: str, optional. Thousands separator.

decimal: str, default '.'. Character to recognize as the decimal point (e.g. use ',' for European data).

lineterminator: str (length 1), optional. Character to break the file into lines. Only valid with the C parser.

quotechar: str (length 1), optional. The character used to denote the start and end of a quoted item.
Quoted items can include the delimiter, which is then ignored.

quoting: int or csv.QUOTE_* instance, default 0. Control field quoting behavior per the csv.QUOTE_* constants: use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).

doublequote: bool, default True. When quotechar is specified and quoting is not QUOTE_NONE, indicates whether to interpret two consecutive quotechar characters INSIDE a field as a single quotechar.

escapechar: str (length 1), optional. One-character string used to escape other characters.

comment: str, optional. Indicates that the remainder of the line should not be parsed; if found at the beginning of a line, the line is ignored altogether. This parameter must be a single character. Like blank lines (as long as skip_blank_lines=True), fully commented lines are ignored by header but not by skiprows. For example, if comment='#', parsing #empty\na,b,c\n1,2,3 with header=0 results in 'a,b,c' being treated as the header.

encoding: str, optional. Encoding to use when reading/writing (e.g. 'utf-8'); see the list of Python standard encodings. Tip: if Chinese text in a csv file comes out garbled, add encoding='utf-8'.

dialect: str or csv.Dialect, optional. If provided, this parameter overrides the values (default or not) of delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. If it is necessary to override values, a ParserWarning is issued; see the csv.Dialect documentation for details.

error_bad_lines: bool, default True. Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, these "bad lines" are dropped from the returned DataFrame.

warn_bad_lines: bool, default True. If error_bad_lines is False and warn_bad_lines is True, a warning is output for each "bad line".

delim_whitespace: bool, default False. Specifies whether whitespace (e.g. ' ' or '\t') is used as the separator, equivalent to setting sep='\s+'. If this option is True, nothing should be passed for the delimiter parameter.

low_memory: bool, default True. Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed-type inference. To ensure no mixed types, either set it to False or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless; use the chunksize or iterator parameter to return the data in chunks. (Only valid with the C parser.)

memory_map: bool, default False. If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data from there. This can improve performance because there is no longer any I/O overhead.

float_precision: str, optional. Specifies which converter the C engine should use for floating-point values: None for the ordinary converter, 'high' for the high-precision converter, and 'round_trip' for the round-trip converter.

Returns: DataFrame or TextParser. A comma-separated values (csv) file is returned as a two-dimensional data structure with labeled axes.
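Concrete examples. The parameter notes above on sep, header, names, and index_col can be sketched with a minimal, self-contained example; the inline data and column names here are made up for illustration, and StringIO stands in for a real file path:

```python
import io
import pandas as pd

# Hypothetical inline data standing in for data.csv.
csv_text = "id,name,score\n1,Ann,90\n2,Bob,85\n"

# Default behavior: comma separator, first line inferred as the header.
df = pd.read_csv(io.StringIO(csv_text))

# sep='\t' reads tab-separated text files the same way.
tsv_text = "id\tname\n1\tAnn\n"
df_tab = pd.read_csv(io.StringIO(tsv_text), sep="\t")

# header=None + names: treat every line as data and supply column names.
raw_text = "1,Ann\n2,Bob\n"
df_named = pd.read_csv(io.StringIO(raw_text), header=None, names=["id", "name"])

# index_col: use a column as the row labels instead of the default RangeIndex.
df_indexed = pd.read_csv(io.StringIO(csv_text), index_col="id")
```

The same calls work unchanged with a filesystem path in place of the StringIO object.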
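Selecting and typing columns. The usecols, dtype, and converters parameters described above can be combined as follows; again the data is a made-up stand-in:

```python
import io

import numpy as np
import pandas as pd

csv_text = "a,b,c\n1,x,3.5\n2,y,4.5\n"

# usecols keeps only the named columns; order in the list does not matter.
df = pd.read_csv(io.StringIO(csv_text), usecols=["a", "c"])

# dtype fixes column types at read time instead of letting pandas infer them.
df_typed = pd.read_csv(io.StringIO(csv_text), dtype={"a": np.float64, "b": str})

# converters applies a function per column and takes precedence over dtype.
df_conv = pd.read_csv(io.StringIO(csv_text), converters={"b": str.upper})
```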
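Missing-value handling. The interaction of na_values and keep_default_na described above is easiest to see with a sentinel value; -99 here is an assumed placeholder, not anything pandas treats specially by default:

```python
import io
import pandas as pd

csv_text = "city,temp\nParis,NA\nOslo,-99\n"

# By default tokens such as 'NA', 'NaN' and '' are read as missing; -99 is not.
df = pd.read_csv(io.StringIO(csv_text))

# na_values appends extra markers, so the sentinel -99 also becomes NaN.
df_na = pd.read_csv(io.StringIO(csv_text), na_values=[-99])

# keep_default_na=False drops the built-in markers: 'NA' stays a literal
# string, and only the explicitly listed value is treated as missing.
df_strict = pd.read_csv(io.StringIO(csv_text), keep_default_na=False,
                        na_values=["-99"])
```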
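Date parsing. A minimal sketch of the parse_dates behavior described above, plus the recommended pd.to_datetime fallback for non-standard formats; the column name 'joined' mirrors the tip in the parameter notes and the data is invented:

```python
import io
import pandas as pd

csv_text = "joined,value\n2020-01-05,10\n2021-06-30,20\n"

# Without parse_dates the column stays as plain strings (object dtype).
raw = pd.read_csv(io.StringIO(csv_text))

# parse_dates=['joined'] converts the column to datetime64 while reading.
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["joined"])

# For non-standard formats, parse after reading with pd.to_datetime,
# as the parameter notes recommend.
raw["joined"] = pd.to_datetime(raw["joined"], format="%Y-%m-%d")
```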
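Reading large files. The nrows tip and the chunksize iteration mentioned above can be sketched like this (a small in-memory "file" stands in for a genuinely large one):

```python
import io
import pandas as pd

csv_text = "x\n" + "\n".join(str(i) for i in range(10)) + "\n"

# nrows reads only the first rows -- handy for sampling a huge file.
head = pd.read_csv(io.StringIO(csv_text), nrows=3)

# chunksize yields DataFrames of at most 4 rows instead of one big frame,
# so each chunk can be processed and discarded.
total = 0
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=4):
    total += chunk["x"].sum()
```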
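Skipping lines and comments. Finally, the comment and skiprows parameters from the reference above, with invented junk lines for illustration:

```python
import io
import pandas as pd

csv_text = "# generated file\na,b\n1,2\n3,4\n"

# comment='#' drops the commented line entirely, so header inference
# starts at the 'a,b' line, as the comment parameter notes describe.
df = pd.read_csv(io.StringIO(csv_text), comment="#")

# skiprows skips lines by position (0-indexed) before any parsing happens.
df_skip = pd.read_csv(io.StringIO("junk\na,b\n1,2\n"), skiprows=1)
```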
