On RGBD Camera Selection (Orbbec)

This post has been sitting around for two weeks; I forgot to publish it. It's some material on camera selection, lightly tidied up and posted here.

I'm taking part in the 3D camera competition jointly hosted by Orbbec and NVIDIA, and now I need to pick a camera for the solution, so I'm keeping a record here. The material comes from the official website and around the internet.

https://www.orbbec.com.cn/

Mainly these two products,

as well as the solutions page:

orbbec.com.cn/sys/37.html

There are two main categories at the moment: complete cameras for development, and modules meant to be embedded into a product, or even integrated inside a phone.

Here are all the cameras; they all sell for around 1,000 RMB.

Above are several products in the Astra camera line, the kind of camera you use at the very start, before full development.

In other words, they're for validating the project; you wouldn't use them when going into mass production.

I put together a rough mind map.

https://developer.orbbec.com.cn/module.html?id=6

This page is buried fairly deep; it's where the later camera modules are listed.

The page looks a bit like DJI's.

The first thing I filtered on was a fast USB 3.0 interface, which already ruled out several cameras.

This accuracy is already plenty; I don't need the 3 mm one.

So the choice comes down to the Deeyea and the Gemini.

All of the specs are, frankly, a very good fit for my project.

It also looks good, and it has a Type-C connector.

Front view.

Notably, it also uses an active binocular structured-light approach.

I haven't tried this approach yet.

Exploded view of the assembly.

Some of its features.

https://github.com/orbbec/

Various libraries, all official.

https://github.com/search?l=Python&q=orbbec&type=Repositories
https://github.com/bensnell/orbbec-astra

A Python library for reading from and working with the camera.
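From memory, grabbing a single depth frame through the OpenNI2 Python bindings looks roughly like this sketch; the Redist path is only a placeholder for wherever your OpenNI2 SDK is unpacked:

import numpy as np
from openni import openni2

# Placeholder path to the OpenNI2 Redist directory -- adjust for your install
openni2.initialize("./OpenNI-Linux-x64-2.3.0.55/Redist")

dev = openni2.Device.open_any()           # first Orbbec device found
depth_stream = dev.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()         # one 16-bit depth frame (values in mm)
depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
depth = depth.reshape(frame.height, frame.width)
print(depth.shape, depth.max())

depth_stream.stop()
openni2.unload()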

https://github.com/rlabbe/filterpy
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/

A corner of my notes.

Two filtering libraries, because the noise is just too high.

https://filterpy.readthedocs.io/en/latest/

A Python implementation of the Kalman filter.
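As a rough sketch of how I'd smooth a noisy depth reading with filterpy: a 1D constant-velocity model, where the noise values and the fake measurements below are assumptions for illustration, not tuned numbers.

import numpy as np
from filterpy.kalman import KalmanFilter

# Constant-velocity Kalman filter for one noisy depth value (in mm)
kf = KalmanFilter(dim_x=2, dim_z=1)
dt = 0.2                                   # ~5 fps, matching the depth stream
kf.x = np.array([1000.0, 0.0])             # initial state: depth, velocity
kf.F = np.array([[1.0, dt],
                 [0.0, 1.0]])              # state transition model
kf.H = np.array([[1.0, 0.0]])              # we only measure depth
kf.P *= 500.0                              # initial state uncertainty
kf.R = np.array([[25.0]])                  # measurement noise (a guess)
kf.Q = np.eye(2) * 0.01                    # process noise (a guess)

for z in [1003, 998, 1010, 995, 1001]:     # fake noisy measurements
    kf.predict()
    kf.update(z)
    print(kf.x[0])                         # filtered depth estimate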

import numpy as np
from openni import openni2
from openni import _openni2
import cv2 as cv
import open3d
import copy
import time

SAVE_POINTCLOUDS = False


def get_rgbd(color_capture, depth_stream, depth_scale=1000, depth_trunc=4, convert_rgb_to_intensity=False):
    # Get color image
    _, color_image = color_capture.read()
    color_image = color_image[:, ::-1, ::-1]

    # Get depth image
    depth_frame = depth_stream.read_frame()
    depth_image = np.frombuffer(depth_frame.get_buffer_as_uint16(), np.uint16)
    depth_image = depth_image.reshape(depth_frame.height, depth_frame.width)
    depth_image = depth_image.astype(np.float32)

    # Create rgbd image from depth and color
    color_image = np.ascontiguousarray(color_image)
    depth_image = np.ascontiguousarray(depth_image)
    rgbd = open3d.geometry.RGBDImage.create_from_color_and_depth(
        open3d.geometry.Image(color_image),
        open3d.geometry.Image(depth_image),
        depth_scale=depth_scale,
        depth_trunc=depth_trunc,
        convert_rgb_to_intensity=convert_rgb_to_intensity
    )

    return rgbd


def main():
    # Init openni
    openni_dir = "../OpenNI_2.3.0.55/Linux/OpenNI-Linux-x64-2.3.0.55/Redist"
    openni2.initialize(openni_dir)

    # Open astra color stream (using opencv)
    color_capture = cv.VideoCapture(2)
    color_capture.set(cv.CAP_PROP_FPS, 5)

    # Open astra depth stream (using openni)
    depth_device = openni2.Device.open_any()
    depth_stream = depth_device.create_depth_stream()
    depth_stream.start()
    depth_stream.set_video_mode(_openni2.OniVideoMode(
        pixelFormat=_openni2.OniPixelFormat.ONI_PIXEL_FORMAT_DEPTH_1_MM,
        resolutionX=640, resolutionY=480, fps=5))
    depth_device.set_image_registration_mode(openni2.IMAGE_REGISTRATION_DEPTH_TO_COLOR)

    # Create pointcloud visualizer
    visualizer = open3d.visualization.Visualizer()
    visualizer.create_window("Pointcloud", width=1000, height=700)

    # Camera intrinsics of the astra pro
    astra_camera_intrinsics = open3d.camera.PinholeCameraIntrinsic(
        width=640, height=480, fx=585, fy=585, cx=320, cy=250)

    # Create initial pointcloud
    pointcloud = open3d.geometry.PointCloud()
    visualizer.add_geometry(pointcloud)

    first = True
    prev_timestamp = 0
    num_stored = 0

    while True:
        rgbd = get_rgbd(color_capture, depth_stream)

        # Convert images to pointcloud
        new_pointcloud = open3d.geometry.PointCloud.create_from_rgbd_image(
            rgbd,
            intrinsic=astra_camera_intrinsics,
        )
        # Flip pointcloud
        new_pointcloud.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
        # Set rendered pointcloud to recorded pointcloud
        pointcloud.points = new_pointcloud.points
        pointcloud.colors = new_pointcloud.colors

        # Save pointcloud each n seconds
        if SAVE_POINTCLOUDS and time.time() > prev_timestamp + 5:
            filename = "pointcloud-%r.pcd" % num_stored
            open3d.io.write_point_cloud(filename, new_pointcloud)
            num_stored += 1
            print("Stored: %s" % filename)
            prev_timestamp = time.time()

        # Reset viewpoint in first frame to look at the scene correctly
        # (e.g. correct bounding box, direction, distance, etc.)
        if first:
            visualizer.reset_view_point(True)
            first = False

        # Update visualizer
        visualizer.update_geometry()
        visualizer.poll_events()
        visualizer.update_renderer()


if __name__ == "__main__":
    main()

Displaying the point cloud of the Orbbec Astra Pro with Open3D.
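For reference, the intrinsics in that script (fx = fy = 585, cx = 320, cy = 250) mean each depth pixel is back-projected with the standard pinhole model, roughly like this sketch (not the library's actual implementation):

# Pinhole back-projection of one depth pixel (u, v) with depth z (in metres),
# roughly what create_from_rgbd_image does for every valid pixel.
fx = fy = 585.0
cx, cy = 320.0, 250.0

def backproject(u, v, z):
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

print(backproject(320, 250, 1.0))   # the principal point maps to (0, 0, 1)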

This post was written quite a while ago and I've forgotten a lot of it; treat it as notes.
