
Using the Intel RealSense SDK 2.0

 云深无际 2022-01-12

Previous posts in this series:

Intel RealSense D430 in Detail

Intel RealSense Camera Introduction, Part 1

Intel R200 Camera First Boot and Quick Review

Intel R200 Depth Camera Development, Part 1

Intel R200 Depth Camera Development, Part 2

Intel R200 Depth Camera Development, Part 3

Intel R200 Depth Camera Development, Part 4

Grabbing a Brand-New Intel R200 RealSense Camera for 159 RMB on Xianyu

Since the D430 can use SDK 2.0, I can finally study this SDK. It took long enough to get here.

It supports the D430 (sniff). Then Intel decided this product line wasn't worth the money and stopped developing it, which is a real shame.

And so begins our SDK 2.0 learning journey.

The SDK provides some handy development tools.

They let you quickly view streams, evaluate image quality, manage the device's firmware, and more.

Besides the EXE files, some source code is provided as well.

The interface is built with ImGui:

A complex GUI built with ImGui

GLFW is used as well:

A Python interface is supported.

D400-series cameras can enable some advanced features; once enabled, the settings are automatically saved to flash memory and are activated on every boot.
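These features are managed through the SDK's advanced-mode extension. Below is a minimal sketch, assuming a D400-series device that supports rs400::advanced_mode (error handling omitted); note that toggling advanced mode reboots the camera:

#include <librealsense2/rs.hpp>
#include <librealsense2/rs_advanced_mode.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;
    auto list = ctx.query_devices();
    if (list.size() == 0) return 1;
    rs2::device dev = list.front();

    if (dev.is<rs400::advanced_mode>())
    {
        auto advanced = dev.as<rs400::advanced_mode>();
        if (!advanced.is_enabled())
            advanced.toggle_advanced_mode(true); // Device reboots; re-enumerate afterwards
        // Advanced-mode settings can be dumped to / restored from JSON
        std::cout << advanced.serialize_json() << std::endl;
    }
    return 0;
}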

The Wiki on GitHub is the part we care about most, and the documentation there is actually quite good.

You can file issues whenever needed, and replies come quickly.

rs::context ctx;
if (ctx.get_device_count() == 0)
    throw std::runtime_error("No device detected. Is it plugged in?");
rs::device & dev = *ctx.get_device(0);

Checking whether a camera is properly connected; this is the SDK 1.0 style.

rs2::context ctx;
auto list = ctx.query_devices(); // Get a snapshot of currently connected devices
if (list.size() == 0)
    throw std::runtime_error("No device detected. Is it plugged in?");
rs2::device dev = list.front();

And this is the 2.0 style.
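Once you have a rs2::device, you can also query basic metadata about it. Here is a small sketch (using the standard RS2_CAMERA_INFO_* fields) that prints the name and serial number of the first connected device:

#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;
    auto list = ctx.query_devices();
    if (list.size() == 0) return 1;
    rs2::device dev = list.front();

    if (dev.supports(RS2_CAMERA_INFO_NAME))
        std::cout << "Name:   " << dev.get_info(RS2_CAMERA_INFO_NAME) << "\n";
    if (dev.supports(RS2_CAMERA_INFO_SERIAL_NUMBER))
        std::cout << "Serial: " << dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER) << "\n";
    return 0;
}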

rs2::pipeline pipe;
pipe.start();

Start streaming data.

rs2::config cfg;
cfg.enable_stream(RS2_STREAM_INFRARED, 1); // Left IR imager
cfg.enable_stream(RS2_STREAM_INFRARED, 2); // Right IR imager
rs2::pipeline pipe;
pipe.start(cfg);

Start the left and right infrared imagers of the depth camera.

rs2::pipeline pipe;
pipe.start();
rs2::frameset frames = pipe.wait_for_frames();
rs2::frame frame = frames.first(RS2_STREAM_DEPTH);
if (frame) frame.get_data(); // Pointer to depth pixels,
                             // invalidated when the last copy of the frame goes out of scope

Wait for a coherent (synchronized) set of frames.
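Once a frameset arrives, the depth frame can be read directly in meters via rs2::depth_frame::get_distance(). A minimal sketch that samples the center pixel (stream resolution left at the defaults):

#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    pipe.start();
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    // get_distance() already applies the sensor's depth scale
    float meters = depth.get_distance(depth.get_width() / 2,
                                      depth.get_height() / 2);
    std::cout << "Center distance: " << meters << " m\n";
    return 0;
}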

rs2::pipeline pipe;
pipe.start();
rs2::frameset frames;
if (pipe.poll_for_frames(&frames))
{
    rs2::frame depth_frame = frames.first(RS2_STREAM_DEPTH);
    depth_frame.get_data();
}

Getting frames by polling.

rs2::pipeline pipe;
pipe.start();

const auto CAPACITY = 5; // allow max latency of 5 frames
rs2::frame_queue queue(CAPACITY);
std::thread t([&]() {
    while (true)
    {
        rs2::depth_frame frame;
        if (queue.poll_for_frame(&frame))
        {
            frame.get_data();
            // Do processing on the frame
        }
    }
});
t.detach();

while (true)
{
    auto frames = pipe.wait_for_frames();
    queue.enqueue(frames.get_depth_frame());
}

Getting frames on a dedicated processing thread.

rs2::pipeline pipe;
rs2::pipeline_profile selection = pipe.start();
auto depth_stream = selection.get_stream(RS2_STREAM_DEPTH);
auto color_stream = selection.get_stream(RS2_STREAM_COLOR);
rs2_extrinsics e = depth_stream.get_extrinsics_to(color_stream);
// Apply extrinsics to the origin
float origin[3] { 0.f, 0.f, 0.f };
float target[3];
rs2_transform_point_to_point(target, &e, origin);

Getting the extrinsics between the depth and color streams (and applying them to a point).

rs2::pipeline pipe;
rs2::pipeline_profile selection = pipe.start();
auto depth_stream = selection.get_stream(RS2_STREAM_DEPTH)
                             .as<rs2::video_stream_profile>();
auto resolution = std::make_pair(depth_stream.width(), depth_stream.height());
auto i = depth_stream.get_intrinsics();
auto principal_point = std::make_pair(i.ppx, i.ppy);
auto focal_length = std::make_pair(i.fx, i.fy);
rs2_distortion model = i.model;

Getting a video stream's intrinsics.

rs2::pipeline pipe;
rs2::pipeline_profile selection = pipe.start();
auto depth_stream = selection.get_stream(RS2_STREAM_DEPTH)
                             .as<rs2::video_stream_profile>();
auto i = depth_stream.get_intrinsics();
float fov[2]; // X, Y fov
rs2_fov(&i, fov);

Getting the field-of-view (FOV) parameters.

rs2::pipeline pipe;
rs2::pipeline_profile selection = pipe.start();
// Find first depth sensor (devices can have zero or more than one)
auto sensor = selection.get_device().first<rs2::depth_sensor>();
auto scale = sensor.get_depth_scale();

Getting the depth units (the scale that maps raw depth values to meters).
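To see what the scale means in practice: raw depth pixels are 16-bit integers, and multiplying by the scale yields meters. A sketch continuing from the snippet above (pipe and scale as defined there):

// Convert a raw 16-bit depth value to meters using the depth scale
rs2::frameset frames = pipe.wait_for_frames();
rs2::depth_frame depth = frames.get_depth_frame();
auto raw = reinterpret_cast<const uint16_t*>(depth.get_data());
int x = depth.get_width() / 2, y = depth.get_height() / 2;
float meters = scale * raw[y * depth.get_width() + x];
// Equivalent to calling depth.get_distance(x, y)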

rs2::pipeline pipe;
rs2::pipeline_profile selection = pipe.start();
rs2::device selected_device = selection.get_device();
auto depth_sensor = selected_device.first<rs2::depth_sensor>();

if (depth_sensor.supports(RS2_OPTION_EMITTER_ENABLED))
{
    depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 1.f); // Enable emitter
    depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 0.f); // Disable emitter
}
if (depth_sensor.supports(RS2_OPTION_LASER_POWER))
{
    // Query min and max values:
    auto range = depth_sensor.get_option_range(RS2_OPTION_LASER_POWER);
    depth_sensor.set_option(RS2_OPTION_LASER_POWER, range.max); // Set max power
    depth_sensor.set_option(RS2_OPTION_LASER_POWER, 0.f);       // Disable laser
}

Controlling the IR emitter and the laser power.

The SDK can be compiled with just the pieces you need; CMake controls the build process.

# First import the library
import pyrealsense2 as rs

# Create a pipeline object, which manages streaming from a connected RealSense device
pipeline = rs.pipeline()
pipeline.start()

try:
    while True:
        # This call waits until a new coherent set of frames is available on the device
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue

        # Print a simple text-based representation of the image, by breaking it into
        # 10x20 pixel regions and approximating the coverage of pixels within one meter
        coverage = [0] * 64
        for y in range(480):
            for x in range(640):
                dist = depth.get_distance(x, y)
                if 0 < dist and dist < 1:
                    coverage[x // 10] += 1

            if y % 20 == 19:
                line = ""
                for c in coverage:
                    line += " .:nhBXWW"[c // 25]
                coverage = [0] * 64
                print(line)
finally:
    pipeline.stop()

The Python interface is very easy to use.

import numpy as np

depth = frames.get_depth_frame()
depth_data = depth.as_frame().get_data()
np_image = np.asanyarray(depth_data)

The SDK also supports the buffer protocol, so performance when accessing frame data through NumPy is better.

function depth_example()
    % Make Pipeline object to manage streaming
    pipe = realsense.pipeline();
    % Make Colorizer object to prettify depth output
    colorizer = realsense.colorizer();

    % Start streaming on an arbitrary camera with default settings
    profile = pipe.start();

    % Get streaming device's name
    dev = profile.get_device();
    name = dev.get_info(realsense.camera_info.name);

    % Get frames. We discard the first couple to allow
    % the camera time to settle
    for i = 1:5
        fs = pipe.wait_for_frames();
    end
    % Stop streaming
    pipe.stop();

    % Select depth frame
    depth = fs.get_depth_frame();
    % Colorize depth frame
    color = colorizer.colorize(depth);

    % Get actual data and convert into a format imshow can use
    % (Color data arrives as [R, G, B, R, G, B, ...] vector)
    data = color.get_data();
    img = permute(reshape(data', [3, color.get_width(), color.get_height()]), [3 2 1]);

    % Display image
    imshow(img);
    title(sprintf("Colorized depth frame from %s", name));
end

As someone who does mathematical modeling, I have an inexplicable fondness for MATLAB. The official team has also put together a simple MATLAB wrapper.

OpenNI2 is supported too, but you have to modify the underlying driver yourself.

Download OpenNI2

A library from eight or nine years ago, and it is still remarkably alive.

These source files need to be added in CMake for the build.

This is the source code; once my C++ is strong enough, I will write up a source-code analysis.

Recommended configuration for building the SDK

Vision applications inevitably run into the need to extract the foreground, and GrabCut is one of the most commonly used algorithms for this.

The algorithm comes from OpenCV.

Here the SDK provides a depth-based foreground extraction example.
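The SDK's OpenCV wrapper ships a GrabCut demo along these lines. The sketch below shows the idea rather than the demo's exact code: the depth image seeds cv::grabCut's mask in place of a hand-drawn rectangle, and the 1-meter cutoff is an arbitrary value chosen for illustration:

// Sketch: depth-seeded GrabCut foreground extraction
#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    auto profile = pipe.start(cfg);
    float scale = profile.get_device().first<rs2::depth_sensor>().get_depth_scale();

    rs2::align align_to_color(RS2_STREAM_COLOR); // overlay depth on the color image

    while (cv::waitKey(1) < 0)
    {
        rs2::frameset frames = align_to_color.process(pipe.wait_for_frames());
        rs2::video_frame color = frames.get_color_frame();
        rs2::depth_frame depth = frames.get_depth_frame();

        cv::Mat bgr(cv::Size(640, 480), CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);
        cv::Mat z16(cv::Size(640, 480), CV_16UC1, (void*)depth.get_data(), cv::Mat::AUTO_STEP);

        // Seed the GrabCut mask from depth: near pixels are probably foreground,
        // far or invalid pixels are probably background
        cv::Mat mask(z16.size(), CV_8UC1);
        for (int y = 0; y < z16.rows; ++y)
            for (int x = 0; x < z16.cols; ++x)
            {
                float m = scale * z16.at<uint16_t>(y, x);
                mask.at<uchar>(y, x) = (m > 0 && m < 1.0f) ? cv::GC_PR_FGD : cv::GC_PR_BGD;
            }

        cv::Mat bgModel, fgModel;
        cv::grabCut(bgr, mask, cv::Rect(), bgModel, fgModel, 1, cv::GC_INIT_WITH_MASK);

        // Keep only the (probably-)foreground pixels
        cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
        cv::Mat result;
        bgr.copyTo(result, fg);
        cv::imshow("Foreground", result);
    }
    return 0;
}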


https://github.com/IntelRealSense/librealsense
https://github.com/ocornut/imgui
https://www.glfw.org/community.html#bindings
https://github.com/IntelRealSense/librealsense/wiki/Troubleshooting-Q&A
https://structure.io/openni
https://docs.opencv.org/3.4/d8/d83/tutorial_py_grabcut.html
