It's 2021 — have you mastered these 12 efficient NumPy & Pandas tricks yet?
From | TowardsDataScience  Author | Kunal Dhariwal
Reported by | 机器之心 (Synced)
index_val
array([1, 8, 2, 0], dtype=int64)
np.sort(x[index_val])
array([10, 12, 12, 16])
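The excerpt omits the definition of `x` and the `np.argpartition` call that produce the output above. A minimal runnable sketch, with an assumed input array that reproduces the outputs shown:

```python
import numpy as np

# Assumed input; the original definition of x is not shown in the excerpt
x = np.array([12, 10, 12, 0, 6, 8, 9, 1, 16, 4, 6, 0])

# argpartition places the indices of the 4 largest values in the last
# 4 slots without fully sorting the array (cheaper than a full argsort)
index_val = np.argpartition(x, -4)[-4:]
print(np.sort(x[index_val]))  # the 4 largest values, ascending: [10 12 12 16]
```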
array2 = np.array([0.13, 0.19, 0.26, 0.31])
# with a tolerance of 0.1, it should return False:
np.allclose(array1, array2, 0.1)
False
# with a tolerance of 0.2, it should return True:
np.allclose(array1, array2, 0.2)
True
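For a self-contained version, here is a sketch with an assumed `array1` (its definition is missing from the excerpt). Note that the third positional argument of `np.allclose` is `rtol`, a relative tolerance, not an absolute one:

```python
import numpy as np

# array1 is an assumption; the excerpt only defines array2
array1 = np.array([0.12, 0.17, 0.24, 0.29])
array2 = np.array([0.13, 0.19, 0.26, 0.31])

# Third positional argument is rtol: elements pass if
# |a - b| <= atol + rtol * |b| (atol defaults to 1e-8)
print(np.allclose(array1, array2, 0.1))  # False: 0.02 > 0.1 * 0.19
print(np.allclose(array1, array2, 0.2))  # True
```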
array([3, 5, 5, 5, 2, 2, 5, 5, 2, 2, 5, 2])
array = np.random.randint(20, size=12)
array
array([ 0, 1, 8, 19, 16, 18, 10, 11, 2, 13, 14, 3])
# Divide by 2 and check if remainder is 1
cond = np.mod(array, 2) == 1
cond
array([False, True, False, True, False, False, False, True, False, True, False, True])
# Use extract to get the values
np.extract(cond, array)
array([ 1, 19, 11, 13, 3])
# Apply condition on extract directly
np.extract(((array < 3) | (array > 15)), array)
array([ 0, 1, 19, 16, 18, 2])
np.where(y > 5)
(array([2, 3, 5, 7, 8], dtype=int64),)
# First will replace the values that match the condition,
# second will replace the values that do not
np.where(y > 5, 'Hit', 'Miss')
array(['Miss', 'Miss', 'Hit', 'Hit', 'Miss', 'Hit', 'Miss', 'Hit', 'Hit'], dtype='<U4')
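The array `y` is not defined in the excerpt. A complete sketch, with an assumed `y` that reproduces the outputs shown:

```python
import numpy as np

# Assumed input; the original y is not shown in the excerpt
y = np.array([1, 5, 6, 8, 1, 7, 3, 6, 9])

# With only a condition, np.where returns the indices where it holds
print(np.where(y > 5))  # (array([2, 3, 5, 7, 8]),)

# With two extra arguments it does element-wise replacement:
# first value where the condition is True, second where it is False
print(np.where(y > 5, 'Hit', 'Miss'))
```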
print('50th Percentile of a, axis = 0 : ', np.percentile(a, 50, axis=0))
50th Percentile of a, axis = 0 : 6.0
b = np.array([[10, 7, 4], [3, 2, 1]])
print('30th Percentile of b, axis = 0 : ', np.percentile(b, 30, axis=0))
30th Percentile of b, axis = 0 : [5.1 3.5 1.9]
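The definition of `a` is missing from the excerpt. A runnable sketch, with an assumed `a` whose median matches the 6.0 shown:

```python
import numpy as np

# Assumed 1-D input; the original a is not shown in the excerpt
a = np.array([1, 5, 6, 8, 1, 7, 3, 6, 9])
print('50th Percentile of a, axis = 0 : ', np.percentile(a, 50, axis=0))

b = np.array([[10, 7, 4], [3, 2, 1]])
# Per-column 30th percentile, linearly interpolated between the two rows
print('30th Percentile of b, axis = 0 : ', np.percentile(b, 30, axis=0))
```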
Tabular data with heterogeneously typed columns, such as SQL tables or Excel spreadsheets;
Ordered and unordered (not necessarily fixed-frequency) time-series data;
Arbitrary matrix data with row/column labels (homogeneously or heterogeneously typed);
Any other form of observational/statistical data set. In fact, the data need not be labeled at all to be placed into a Pandas structure.
Easy handling of missing data (represented as NaN) in floating-point as well as non-floating-point data;
Size mutability: columns can be inserted into or deleted from DataFrames and higher-dimensional objects;
Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data;
Powerful, flexible group-by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data;
Easy conversion of ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects;
Intelligent label-based slicing, indexing, and subsetting of large data sets;
Intuitive merging and joining of data sets;
Flexible reshaping and pivoting of data sets;
Hierarchical labeling of axes (possibly multiple labels per axis);
Robust IO tools for loading data from flat files (CSV and delimited), Excel files, and databases, and for saving/loading data in the HDF5 format;
Time series-specific functionality: date-range generation and frequency conversion, moving-window statistics, date shifting and lagging.
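As an illustration of the time-series features in the last point, here is a minimal sketch (the data, dates, and frequencies are made up for the example):

```python
import numpy as np
import pandas as pd

# Hypothetical daily series covering Jan-Mar 2021
idx = pd.date_range('2021-01-01', periods=90, freq='D')
ts = pd.Series(np.arange(90, dtype=float), index=idx)

# Frequency conversion: downsample the daily data to monthly means
monthly = ts.resample('M').mean()

# Moving-window statistic: 7-day rolling mean
rolling = ts.rolling(window=7).mean()

# Shifting/lagging: day-over-day difference
lagged = ts - ts.shift(1)
print(monthly)
```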
import io
import requests
import pandas as pd

# I am using this online data set just to make things easier for you guys
url = 'https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/datasets/AirPassengers.csv'
s = requests.get(url).content
# read only first 10 rows
df = pd.read_csv(io.StringIO(s.decode('utf-8')), nrows=10, index_col=0)
dframe = pd.DataFrame(np.random.randn(4, 3), columns=list('bde'), index=['India', 'USA', 'China', 'Russia'])
# compute a formatted string from each floating point value in frame
changefn = lambda x: '%.2f' % x
# Make changes element-wise
dframe['d'].map(changefn)
fn = lambda x: x.max() - x.min()
# Apply this on dframe that we've just created above
dframe.apply(fn)
filter1 = df['value'].isin([112])
filter2 = df['time'].isin([1949.000000])
df[filter1 & filter2]
data = pd.Series(['India', 'Pakistan', 'China', 'Mongolia'])
# Assigning issue that we face
data1 = data
# Change a value
data1[0] = 'USA'
# Also changes value in old dataframe
data
# To prevent that, we use
# creating copy of series
new = data.copy()
# assigning new values
new[1] = 'Changed value'
# printing data
print(new)
print(data)
framex = df.select_dtypes(include='float64')
# Returns only time column
school = pd.DataFrame({'A': ['Jay', 'Usher', 'Nicky', 'Romero', 'Will'],
                       'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'],
                       'C': [26, 22, 20, 23, 24]})
# Let's create a pivot table to segregate students based on age and course
table = pd.pivot_table(school, values='A', index=['B', 'C'],
                       columns=['B'], aggfunc=np.sum, fill_value='Not Available')
table