KITTI Data Visualization


Data Download

The KITTI Vision Benchmark Suite

This walkthrough uses 2011_09_26_drive_0005 (0.6 GB) as the reference sequence; download both the [synced+rectified data] and [calibration] archives.

After downloading, extract both archives and place the extracted calibration files alongside the drive data, as laid out below.
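For reference, a typical layout after extracting both archives into the same parent folder looks roughly like this (the kitti_data folder name is only a placeholder; the structure under 2011_09_26 comes from the archives themselves):

kitti_data/
  2011_09_26/
    calib_cam_to_cam.txt
    calib_imu_to_velo.txt
    calib_velo_to_cam.txt
    2011_09_26_drive_0005_sync/
      image_00/ ... image_03/
      oxts/
      velodyne_points/

kitti2bag expects to find the three calib_*.txt files inside the date folder 2011_09_26/, next to the drive folder.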

Install the kitti2bag Package

pip install kitti2bag

If you later hit the following error:

ImportError: dynamic module does not define module export function (PyInit__tf2)

reinstall it under Python 2, since the ROS 1 Python modules it depends on (such as tf2) are built for Python 2:

pip2 install kitti2bag

Go to the directory that contains the 2011_09_26 folder and run the following command to do the conversion:

kitti2bag -t 2011_09_26 -r 0005 raw_synced

A screenshot of the conversion run:

When it finishes, a bag file is generated in the current directory.
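Before playing it back, the bag's topics and message counts can be inspected with rosbag info (the file name matches the bag generated above):

rosbag info kitti_2011_09_26_drive_0005_synced.bag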

 

Start ROS and Play the Bag

roscore

Go to the directory containing the newly generated bag file and run:

rqt_bag kitti_2011_09_26_drive_0005_synced.bag 

This opens the rqt_bag window:

Select one of the image topics, right-click it, choose View -> Image, then click Play to start playback.

Publishing Images

Create a workspace and a package:

~/catkin_ws/src$ catkin_create_pkg kitti_tutorial rospy

Write kitti.py:

#!/usr/bin/env python
import os
import cv2
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        img = cv2.imread(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        cam_pub.publish(bridge.cv2_to_imgmsg(img, 'bgr8'))
        rospy.loginfo("camera image published")
        rate.sleep()
        frame += 1
        frame %= 154

Make the script executable:

sudo chmod +x kitti.py

Build the workspace:

~/catkin_ws$ catkin_make
~/catkin_ws$ source devel/setup.bash

When writing the Python code in PyCharm, you may hit the error "No module named rospy", because PyCharm is not using the Python interpreter that has the ROS packages on its path.

Reference: pycharm中无法加载rospy (CSDN blog)

The fix used here is to switch the project interpreter to the Python 2.7 under /opt.
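If changing the interpreter is inconvenient, another workaround is to put ROS's Python 2.7 packages on the path before importing rospy. This is only a sketch and assumes a ROS Melodic install under /opt/ros/melodic; adjust the distro name to match your system:

# Workaround sketch: make rospy importable when PyCharm uses a non-ROS interpreter.
# The path below assumes ROS Melodic; change "melodic" to your installed distro.
import sys
sys.path.append('/opt/ros/melodic/lib/python2.7/dist-packages')
import rospy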

Start rviz:

rviz

Start the ROS node:

~/catkin_ws$ rosrun kitti_tutorial kitti.py

Add the image topic in rviz, as shown in the screenshot:

Publishing Point Cloud Data

Modify kitti.py:

#!/usr/bin/env python
import os
import cv2
import numpy as np
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import Image, PointCloud2
import sensor_msgs.point_cloud2 as pcl2
from cv_bridge import CvBridge

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        img = cv2.imread(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = np.fromfile(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame), dtype=np.float32).reshape(-1, 4)
        cam_pub.publish(bridge.cv2_to_imgmsg(img, 'bgr8'))
        header = Header()
        header.stamp = rospy.Time.now()
        header.frame_id = 'map'
        pcl_pub.publish(pcl2.create_cloud_xyz32(header, point_cloud[:, :3]))
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

After adding the new topic in rviz, it looks like this:

Adjust the Size setting of the point cloud display as needed.

Save the rviz configuration so the topics do not have to be re-added every time rviz starts.

Click File -> Save Config As, choose a path inside the package, and save it as xxxx.rviz. Next time, launch rviz with rviz -d xxxx.rviz.
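For example, assuming the config was saved inside the package (the rviz/ subfolder and the kitti.rviz file name below are placeholders):

rviz -d ~/catkin_ws/src/kitti_tutorial/rviz/kitti.rviz
rosrun rviz rviz -d $(rospack find kitti_tutorial)/rviz/kitti.rviz   # equivalent, once the workspace is sourced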

Adding the Camera Field of View

Refactor the code into data_utils.py:

#!/usr/bin/env python
import cv2
import numpy as np

def read_camera(path):
    return cv2.imread(path)

def read_point_cloud(path):
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

publish_utils.py

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Point
from std_msgs.msg import Header
from sensor_msgs.msg import Image, PointCloud2
import sensor_msgs.point_cloud2 as pcl2
from cv_bridge import CvBridge
from visualization_msgs.msg import Marker

FRAME_ID = 'map'

def publish_camera(cam_pub, bridge, image):
    cam_pub.publish(bridge.cv2_to_imgmsg(image, 'bgr8'))

def publish_point_cloud(pcl_pub, point_cloud):
    header = Header()
    header.stamp = rospy.Time.now()
    header.frame_id = FRAME_ID
    pcl_pub.publish(pcl2.create_cloud_xyz32(header, point_cloud[:, :3]))

def publish_ego_car(ego_car_pub):
    """
    publish left and right 45 degree FOV lines and ego car model mesh
    :param ego_car_pub:
    :return:
    """
    marker = Marker()
    marker.header.frame_id = FRAME_ID
    marker.header.stamp = rospy.Time.now()
    marker.id = 0
    marker.action = Marker.ADD
    marker.lifetime = rospy.Duration()
    marker.type = Marker.LINE_STRIP
    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0
    marker.color.a = 1.0
    marker.scale.x = 0.2
    marker.points = []
    marker.points.append(Point(10, -10, 0))
    marker.points.append(Point(0, 0, 0))
    marker.points.append(Point(10, 10, 0))
    ego_car_pub.publish(marker)

kitti.py

#!/usr/bin/env python
import os
import numpy as np
import rospy
from data_utils import *
from publish_utils import *

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    ego_pub = rospy.Publisher('kitti_ego_car', Marker, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        image = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame))
        publish_camera(cam_pub, bridge, image)
        publish_point_cloud(pcl_pub, point_cloud)
        publish_ego_car(ego_pub)
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

Drawing the Car Model

Download a car model: Low-Poly Car Free 3D Model - .3ds .obj .dae .blend .fbx .mtl - Free3D

After extracting it, put the Car.dae file in a folder inside the package; here the folder structure from the extracted archive is used as-is.
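Whichever folder is chosen, the mesh must end up somewhere inside the package so that a package:// URI can reach it. A sketch that instead copies it into a meshes/ folder (the download path is only an example):

mkdir -p ~/catkin_ws/src/kitti_tutorial/meshes
cp ~/Downloads/am7f3psa0agw-Car-Model/Car-Model/Car.dae ~/catkin_ws/src/kitti_tutorial/meshes/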

Add the following function to publish_utils.py:

# Note: this function also needs "import numpy as np" and "from tf import transformations"
# at the top of publish_utils.py.
def publish_car_model(car_model_pub):
    mech_marker = Marker()
    mech_marker.header.frame_id = "map"
    mech_marker.header.stamp = rospy.Time.now()
    # Marker ids must be unique and must be integers
    mech_marker.id = -1
    mech_marker.type = Marker.MESH_RESOURCE
    mech_marker.action = Marker.ADD
    mech_marker.lifetime = rospy.Duration()
    # package-relative path to the .dae mesh file
    mech_marker.mesh_resource = "package://kitti_tutorials/am7f3psa0agw-Car-Model/Car-Model/Car.dae"
    # Position follows the official sensor-setup diagram: the velodyne lidar is at (0, 0, 0)
    # relative to map and is mounted 1.73 m above the ground, so the car model sits at (0, 0, -1.73)
    mech_marker.pose.position.x = 0.0
    mech_marker.pose.position.y = 0.0
    mech_marker.pose.position.z = -1.73
    # This differs from the original tutorial because the tf tree changed; the quaternion
    # below was tuned by hand for this particular mesh
    q = transformations.quaternion_from_euler(np.pi, np.pi, -np.pi / 2.0)
    mech_marker.pose.orientation.x = q[0]
    mech_marker.pose.orientation.y = q[1]
    mech_marker.pose.orientation.z = q[2]
    mech_marker.pose.orientation.w = q[3]
    mech_marker.color.r = 1.0
    mech_marker.color.g = 1.0
    mech_marker.color.b = 1.0
    mech_marker.color.a = 1.0  # alpha (opacity)
    # scale of the car model
    mech_marker.scale.x = 1.0
    mech_marker.scale.y = 1.0
    mech_marker.scale.z = 1.0
    car_model_pub.publish(mech_marker)

Modify kitti.py:

#!/usr/bin/env python
import os
import numpy as np
import rospy
from data_utils import *
from publish_utils import *

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    ego_pub = rospy.Publisher('kitti_ego_car', Marker, queue_size=10)
    model_pub = rospy.Publisher('kitti_car_model', Marker, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        image = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame))
        publish_camera(cam_pub, bridge, image)
        publish_point_cloud(pcl_pub, point_cloud)
        publish_ego_car(ego_pub)
        publish_car_model(model_pub)
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

Re-running the node now produces the following errors:

[ERROR] [1679018952.335224247]: Could not load resource [package://kitti_tutorials/meshes/Car.dae]: Unable to open file "package://kitti_tutorials/meshes/Car.dae".

[rospack] Error: package 'kitti_tutorial' not found

Reference: rviz进行kitti数据集可视化时加载小车模型报错 (CSDN blog)
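In this setup the likely causes are a package-name mismatch (the package was created as kitti_tutorial, while mesh_resource points at kitti_tutorials), a mesh path that does not exist inside the package, and a shell that has not sourced the workspace. A sketch of the fix, assuming the meshes/ folder from the earlier step:

# In publish_utils.py, point mesh_resource at the real package name and mesh location:
#   mech_marker.mesh_resource = "package://kitti_tutorial/meshes/Car.dae"
# Then make sure the workspace is sourced in every terminal that runs rviz or the node:
source ~/catkin_ws/devel/setup.bash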

Publishing with MarkerArray

Modify publish_utils.py:

# Note: this version also needs "from visualization_msgs.msg import Marker, MarkerArray",
# "import numpy as np" and "from tf import transformations" at the top of publish_utils.py.
def publish_ego_car(ego_car_pub):
    """
    publish left and right 45 degree FOV lines and ego car model mesh
    :param ego_car_pub:
    :return:
    """
    marker_array = MarkerArray()

    marker = Marker()
    marker.header.frame_id = FRAME_ID
    marker.header.stamp = rospy.Time.now()
    marker.id = 0
    marker.action = Marker.ADD
    marker.lifetime = rospy.Duration()
    marker.type = Marker.LINE_STRIP
    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0
    marker.color.a = 1.0
    marker.scale.x = 0.2
    marker.points = []
    marker.points.append(Point(10, -10, 0))
    marker.points.append(Point(0, 0, 0))
    marker.points.append(Point(10, 10, 0))
    marker_array.markers.append(marker)

    mech_marker = Marker()
    mech_marker.header.frame_id = "map"
    mech_marker.header.stamp = rospy.Time.now()
    mech_marker.id = -1
    mech_marker.type = Marker.MESH_RESOURCE
    mech_marker.action = Marker.ADD
    mech_marker.lifetime = rospy.Duration()
    mech_marker.mesh_resource = "package://kitti_tutorial/meshes/Car.dae"
    mech_marker.pose.position.x = 0.0
    mech_marker.pose.position.y = 0.0
    mech_marker.pose.position.z = -1.73
    q = transformations.quaternion_from_euler(np.pi, np.pi, -np.pi / 2.0)
    mech_marker.pose.orientation.x = q[0]
    mech_marker.pose.orientation.y = q[1]
    mech_marker.pose.orientation.z = q[2]
    mech_marker.pose.orientation.w = q[3]
    mech_marker.color.r = 1.0
    mech_marker.color.g = 1.0
    mech_marker.color.b = 1.0
    mech_marker.color.a = 1.0
    mech_marker.scale.x = 1.0
    mech_marker.scale.y = 1.0
    mech_marker.scale.z = 1.0
    marker_array.markers.append(mech_marker)

    ego_car_pub.publish(marker_array)

kitti.py:

#!/usr/bin/env python
import os
import numpy as np
import rospy
from data_utils import *
from publish_utils import *

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    ego_pub = rospy.Publisher('kitti_ego_car', MarkerArray, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        image = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame))
        publish_camera(cam_pub, bridge, image)
        publish_point_cloud(pcl_pub, point_cloud)
        publish_ego_car(ego_pub)
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

Publishing IMU Data

Add the following to data_utils.py:

IMU_NAME = ["lat", "lon", "alt", "roll", "pitch", "yaw", "vn", "ve", "vf", "vl", "vu",
            "ax", "ay", "az", "af", "al", "au", "wx", "wy", "wz", "wf", "wl", "wu",
            "posacc", "velacc", "navstat", "numsats", "posmode", "velmode", "orimode"]

# Note: requires "import pandas as pd" at the top of data_utils.py.
def read_imu(path):
    imu_data = pd.read_csv(path, header=None, sep=" ")
    imu_data.columns = IMU_NAME
    return imu_data

publish_utils.py

# Note: requires "from sensor_msgs.msg import Imu" and "from tf import transformations".
def publish_imu(imu_pub, imu_data):
    imu = Imu()
    imu.header.frame_id = "map"
    imu.header.stamp = rospy.Time.now()
    q = transformations.quaternion_from_euler(float(imu_data.roll), float(imu_data.pitch), float(imu_data.yaw))
    imu.orientation.x = q[0]
    imu.orientation.y = q[1]
    imu.orientation.z = q[2]
    imu.orientation.w = q[3]
    imu.linear_acceleration.x = imu_data.af
    imu.linear_acceleration.y = imu_data.al
    imu.linear_acceleration.z = imu_data.au
    imu.angular_velocity.x = imu_data.wx
    imu.angular_velocity.y = imu_data.wy
    imu.angular_velocity.z = imu_data.wz
    imu_pub.publish(imu)

 kitti.py

#!/usr/bin/env python
import os
import numpy as np
import rospy
from data_utils import *
from publish_utils import *

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    ego_pub = rospy.Publisher('kitti_ego_car', MarkerArray, queue_size=10)
    imu_pub = rospy.Publisher('kitti_imu', Imu, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        image = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame))
        imu_data = read_imu(os.path.join(DATA_PATH, 'oxts/data/%010d.txt' % frame))
        publish_camera(cam_pub, bridge, image)
        publish_point_cloud(pcl_pub, point_cloud)
        publish_ego_car(ego_pub)
        publish_imu(imu_pub, imu_data)
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

Subscribing to the new topics shows the following:

Publishing GPS Data

Add the following to publish_utils.py:

# Note: requires "from sensor_msgs.msg import NavSatFix".
def publish_gps(gps_pub, gps_data):
    gps = NavSatFix()
    gps.header.frame_id = "map"
    gps.header.stamp = rospy.Time.now()
    gps.latitude = gps_data.lat
    gps.longitude = gps_data.lon
    gps.altitude = gps_data.alt
    gps_pub.publish(gps)

kitti.py:

#!/usr/bin/env python
import os
import numpy as np
import rospy
from data_utils import *
from publish_utils import *

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    ego_pub = rospy.Publisher('kitti_ego_car', MarkerArray, queue_size=10)
    imu_pub = rospy.Publisher('kitti_imu', Imu, queue_size=10)
    gps_pub = rospy.Publisher('kitti_gps', NavSatFix, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        image = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame))
        imu_data = read_imu(os.path.join(DATA_PATH, 'oxts/data/%010d.txt' % frame))
        publish_camera(cam_pub, bridge, image)
        publish_point_cloud(pcl_pub, point_cloud)
        publish_ego_car(ego_pub)
        publish_imu(imu_pub, imu_data)
        publish_gps(gps_pub, imu_data)
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

Drawing 2D Detection Boxes

References:

ROS1结合自动驾驶数据集Kitti开发教程(七)下载图像标注资料并读取显示 (CSDN blog)

ROS1结合自动驾驶数据集Kitti开发教程(八)画出2D检测框 (CSDN blog)

Code:

kitti.py

#!/usr/bin/env python
import os
import numpy as np
import rospy
from data_utils import *
from publish_utils import *

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    ego_pub = rospy.Publisher('kitti_ego_car', MarkerArray, queue_size=10)
    imu_pub = rospy.Publisher('kitti_imu', Imu, queue_size=10)
    gps_pub = rospy.Publisher('kitti_gps', NavSatFix, queue_size=10)
    bridge = CvBridge()
    df_tracking = read_labelfile('/home/d501//kitti_data/2011_09_26/data_tracking_label_2/training/label_02/0000.txt')
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        boxes = np.array(df_tracking[df_tracking.frame == frame][['bbox_left', 'bbox_top', 'bbox_right', 'bbox_bottom']])
        types = np.array(df_tracking[df_tracking.frame == frame]['type'])
        image = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame))
        imu_data = read_imu(os.path.join(DATA_PATH, 'oxts/data/%010d.txt' % frame))
        publish_camera(cam_pub, bridge, image, boxes, types)
        publish_point_cloud(pcl_pub, point_cloud)
        publish_ego_car(ego_pub)
        publish_imu(imu_pub, imu_data)
        publish_gps(gps_pub, imu_data)
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

data_utils.py

#!/usr/bin/env python
import cv2
import numpy as np
import pandas as pd

IMU_NAME = ["lat", "lon", "alt", "roll", "pitch", "yaw", "vn", "ve", "vf", "vl", "vu",
            "ax", "ay", "az", "af", "al", "au", "wx", "wy", "wz", "wf", "wl", "wu",
            "posacc", "velacc", "navstat", "numsats", "posmode", "velmode", "orimode"]
LABEL_NAME = ["frame", "track id", "type", "truncated", "occluded", "alpha",
              "bbox_left", "bbox_top", "bbox_right", "bbox_bottom",
              "dimensions_height", "dimensions_width", "dimensions_length",
              "location_x", "location_y", "location_z", "rotation_y"]

def read_camera(path):
    return cv2.imread(path)

def read_point_cloud(path):
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def read_imu(path):
    imu_data = pd.read_csv(path, header=None, sep=" ")
    imu_data.columns = IMU_NAME
    return imu_data

def read_labelfile(path):
    data = pd.read_csv(path, header=None, sep=" ")
    data.columns = LABEL_NAME
    data.loc[data.type.isin(['Van', 'Car', 'Truck']), 'type'] = 'Car'
    data = data[data.type.isin(['Car', 'Cyclist', 'Pedestrian'])]
    return data

publish_utils.py

#!/usr/bin/env python
import cv2
import rospy
from geometry_msgs.msg import Point
from std_msgs.msg import Header
from sensor_msgs.msg import Image, PointCloud2, Imu, NavSatFix
import sensor_msgs.point_cloud2 as pcl2
from cv_bridge import CvBridge
from tf import transformations
from visualization_msgs.msg import Marker, MarkerArray
import numpy as np
import pandas as pd
import os
FRAME_ID = 'map'
DETECTION_COLOR_DICT = {'Car': (255, 255, 0), 'Pedestrian': (0, 226, 255), 'Cyclist': (141, 40, 255)}

def publish_camera(cam_pub, bridge, image, boxes, types):
    for type, box in zip(types, boxes):
        top_left = int(box[0]), int(box[1])
        bottom_right = int(box[2]), int(box[3])
        cv2.rectangle(image, top_left, bottom_right, DETECTION_COLOR_DICT[type], 2)
    cam_pub.publish(bridge.cv2_to_imgmsg(image, 'bgr8'))

def publish_point_cloud(pcl_pub, point_cloud):
    header = Header()
    header.stamp = rospy.Time.now()
    header.frame_id = FRAME_ID
    pcl_pub.publish(pcl2.create_cloud_xyz32(header, point_cloud[:, :3]))

def publish_ego_car(ego_car_pub):
    """
    publish left and right 45 degree FOV lines and ego car model mesh
    :param ego_car_pub:
    :return:
    """
    marker_array = MarkerArray()

    marker = Marker()
    marker.header.frame_id = FRAME_ID
    marker.header.stamp = rospy.Time.now()
    marker.id = 0
    marker.action = Marker.ADD
    marker.lifetime = rospy.Duration()
    marker.type = Marker.LINE_STRIP
    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0
    marker.color.a = 1.0
    marker.scale.x = 0.2
    marker.points = []
    marker.points.append(Point(10, -10, 0))
    marker.points.append(Point(0, 0, 0))
    marker.points.append(Point(10, 10, 0))
    marker_array.markers.append(marker)

    mech_marker = Marker()
    mech_marker.header.frame_id = "map"
    mech_marker.header.stamp = rospy.Time.now()
    mech_marker.id = -1
    mech_marker.type = Marker.MESH_RESOURCE
    mech_marker.action = Marker.ADD
    mech_marker.lifetime = rospy.Duration()
    mech_marker.mesh_resource = "package://kitti_tutorial/meshes/Car.dae"
    mech_marker.pose.position.x = 0.0
    mech_marker.pose.position.y = 0.0
    mech_marker.pose.position.z = -1.73
    q = transformations.quaternion_from_euler(np.pi, np.pi, -np.pi / 2.0)
    mech_marker.pose.orientation.x = q[0]
    mech_marker.pose.orientation.y = q[1]
    mech_marker.pose.orientation.z = q[2]
    mech_marker.pose.orientation.w = q[3]
    mech_marker.color.r = 1.0
    mech_marker.color.g = 1.0
    mech_marker.color.b = 1.0
    mech_marker.color.a = 1.0
    mech_marker.scale.x = 1.0
    mech_marker.scale.y = 1.0
    mech_marker.scale.z = 1.0
    marker_array.markers.append(mech_marker)

    ego_car_pub.publish(marker_array)

def publish_imu(imu_pub, imu_data):
    imu = Imu()
    imu.header.frame_id = "map"
    imu.header.stamp = rospy.Time.now()
    q = transformations.quaternion_from_euler(float(imu_data.roll), float(imu_data.pitch), float(imu_data.yaw))
    imu.orientation.x = q[0]
    imu.orientation.y = q[1]
    imu.orientation.z = q[2]
    imu.orientation.w = q[3]
    imu.linear_acceleration.x = imu_data.af
    imu.linear_acceleration.y = imu_data.al
    imu.linear_acceleration.z = imu_data.au
    imu.angular_velocity.x = imu_data.wx
    imu.angular_velocity.y = imu_data.wy
    imu.angular_velocity.z = imu_data.wz
    imu_pub.publish(imu)

def publish_gps(gps_pub, gps_data):
    gps = NavSatFix()
    gps.header.frame_id = "map"
    gps.header.stamp = rospy.Time.now()
    gps.latitude = gps_data.lat
    gps.longitude = gps_data.lon
    gps.altitude = gps_data.alt
    gps_pub.publish(gps)

The result looks like this:

 

Drawing 3D Detection Boxes

Reference: ROS1结合自动驾驶数据集Kitti开发教程(九)画出3D检测框 (CSDN blog)

The code is listed below:

publish_utils.py

#!/usr/bin/env python
import cv2
import rospy
from geometry_msgs.msg import Point
from std_msgs.msg import Header
from sensor_msgs.msg import Image, PointCloud2, Imu, NavSatFix
import sensor_msgs.point_cloud2 as pcl2
from cv_bridge import CvBridge
from tf import transformations
from visualization_msgs.msg import Marker, MarkerArray
import numpy as np
import pandas as pd
import os
FRAME_ID = 'map'
DETECTION_COLOR_DICT = {'Car': (255, 255, 0), 'Pedestrian': (0, 226, 255), 'Cyclist': (141, 40, 255)}
LIFETIME = 0.1
LINES = [[0, 1], [1, 2], [2, 3], [3, 0]]  # Lower plane parallel to Z=0 plane
LINES += [[4, 5], [5, 6], [6, 7], [7, 4]]  # Upper plane parallel to Z=0 plane
LINES += [[0, 4], [1, 5], [2, 6], [3, 7]]  # Connections between upper and lower planes
LINES += [[4, 1], [5, 0]]  # Cross on the front face to indicate heading

def publish_camera(cam_pub, bridge, image, boxes, types):
    for type, box in zip(types, boxes):
        top_left = int(box[0]), int(box[1])
        bottom_right = int(box[2]), int(box[3])
        cv2.rectangle(image, top_left, bottom_right, DETECTION_COLOR_DICT[type], 2)
    cam_pub.publish(bridge.cv2_to_imgmsg(image, 'bgr8'))

def publish_point_cloud(pcl_pub, point_cloud):
    header = Header()
    header.stamp = rospy.Time.now()
    header.frame_id = FRAME_ID
    pcl_pub.publish(pcl2.create_cloud_xyz32(header, point_cloud[:, :3]))

def publish_ego_car(ego_car_pub):
    """
    publish left and right 45 degree FOV lines and ego car model mesh
    :param ego_car_pub:
    :return:
    """
    marker_array = MarkerArray()

    marker = Marker()
    marker.header.frame_id = FRAME_ID
    marker.header.stamp = rospy.Time.now()
    marker.id = 0
    marker.action = Marker.ADD
    marker.lifetime = rospy.Duration()
    marker.type = Marker.LINE_STRIP
    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0
    marker.color.a = 1.0
    marker.scale.x = 0.2
    marker.points = []
    marker.points.append(Point(10, -10, 0))
    marker.points.append(Point(0, 0, 0))
    marker.points.append(Point(10, 10, 0))
    marker_array.markers.append(marker)

    mech_marker = Marker()
    mech_marker.header.frame_id = "map"
    mech_marker.header.stamp = rospy.Time.now()
    mech_marker.id = -1
    mech_marker.type = Marker.MESH_RESOURCE
    mech_marker.action = Marker.ADD
    mech_marker.lifetime = rospy.Duration()
    mech_marker.mesh_resource = "package://kitti_tutorial/meshes/Car.dae"
    mech_marker.pose.position.x = 0.0
    mech_marker.pose.position.y = 0.0
    mech_marker.pose.position.z = -1.73
    q = transformations.quaternion_from_euler(np.pi, np.pi, -np.pi / 2.0)
    mech_marker.pose.orientation.x = q[0]
    mech_marker.pose.orientation.y = q[1]
    mech_marker.pose.orientation.z = q[2]
    mech_marker.pose.orientation.w = q[3]
    mech_marker.color.r = 1.0
    mech_marker.color.g = 1.0
    mech_marker.color.b = 1.0
    mech_marker.color.a = 1.0
    mech_marker.scale.x = 1.0
    mech_marker.scale.y = 1.0
    mech_marker.scale.z = 1.0
    marker_array.markers.append(mech_marker)

    ego_car_pub.publish(marker_array)

def publish_imu(imu_pub, imu_data):
    imu = Imu()
    imu.header.frame_id = "map"
    imu.header.stamp = rospy.Time.now()
    q = transformations.quaternion_from_euler(float(imu_data.roll), float(imu_data.pitch), float(imu_data.yaw))
    imu.orientation.x = q[0]
    imu.orientation.y = q[1]
    imu.orientation.z = q[2]
    imu.orientation.w = q[3]
    imu.linear_acceleration.x = imu_data.af
    imu.linear_acceleration.y = imu_data.al
    imu.linear_acceleration.z = imu_data.au
    imu.angular_velocity.x = imu_data.wx
    imu.angular_velocity.y = imu_data.wy
    imu.angular_velocity.z = imu_data.wz
    imu_pub.publish(imu)

def publish_gps(gps_pub, gps_data):
    gps = NavSatFix()
    gps.header.frame_id = "map"
    gps.header.stamp = rospy.Time.now()
    gps.latitude = gps_data.lat
    gps.longitude = gps_data.lon
    gps.altitude = gps_data.alt
    gps_pub.publish(gps)

def publish_3dbox(box3d_pub, corners_3d_velos, types):
    marker_array = MarkerArray()
    for i, corners_3d_velo in enumerate(corners_3d_velos):
        marker = Marker()
        marker.header.frame_id = FRAME_ID
        marker.header.stamp = rospy.Time.now()
        marker.id = i
        marker.action = Marker.ADD
        marker.lifetime = rospy.Duration(LIFETIME)
        marker.type = Marker.LINE_LIST
        b, g, r = DETECTION_COLOR_DICT[types[i]]
        marker.color.r = r / 255.0
        marker.color.g = g / 255.0
        marker.color.b = b / 255.0
        marker.color.a = 1.0
        marker.scale.x = 0.1
        marker.points = []
        for l in LINES:
            p1 = corners_3d_velo[l[0]]
            marker.points.append(Point(p1[0], p1[1], p1[2]))
            p2 = corners_3d_velo[l[1]]
            marker.points.append(Point(p2[0], p2[1], p2[2]))
        marker_array.markers.append(marker)
    box3d_pub.publish(marker_array)

Download the calibration helper kitti_util.py. It provides the Calibration class used below; constructed with from_video=True it reads the calibration files from the raw-data date directory, and its project_rect_to_velo() converts the box corners from the rectified camera-2 frame into the velodyne frame.

 kitti.py

#!/usr/bin/env python
#  -*-coding:utf8 -*-
import os
import numpy as np
import rospy
from data_utils import *
from publish_utils import *
from kitti_util import *

DATA_PATH = "/home/d501/kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26/2011_09_26_drive_0005_sync"

def compute_3d_box_cam2(h, w, l, x, y, z, yaw):
    # Rotation matrix around the camera Y axis (yaw)
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    # x, y, z offsets of the eight box corners (origin at the bottom center of the box)
    x_corners = [l/2, l/2, -l/2, -l/2, l/2, l/2, -l/2, -l/2]
    y_corners = [0, 0, 0, 0, -h, -h, -h, -h]
    z_corners = [w/2, -w/2, -w/2, w/2, w/2, -w/2, -w/2, w/2]
    # Rotate the (3, 8) corner matrix
    corners_3d_cam2 = np.dot(R, np.vstack([x_corners, y_corners, z_corners]))
    # Add the object location to obtain the final corner coordinates
    corners_3d_cam2 += np.vstack([x, y, z])
    return corners_3d_cam2

if __name__ == "__main__":
    frame = 0
    rospy.init_node("kitti_node", anonymous=True)
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher('kitti_point_cloud', PointCloud2, queue_size=10)
    ego_pub = rospy.Publisher('kitti_ego_car', MarkerArray, queue_size=10)
    imu_pub = rospy.Publisher('kitti_imu', Imu, queue_size=10)
    gps_pub = rospy.Publisher('kitti_gps', NavSatFix, queue_size=10)
    box3d_pub = rospy.Publisher('kitti_3d', MarkerArray, queue_size=10)
    bridge = CvBridge()
    df_tracking = read_labelfile('/home/d501//kitti_data/2011_09_26/data_tracking_label_2/training/label_02/0000.txt')
    calib = Calibration("/home/d501//kitti_data/2011_09_26/2011_09_26_drive_0005_sync/2011_09_26", from_video=True)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        df_tracking_frame = df_tracking[df_tracking.frame == frame]
        boxes_2d = np.array(df_tracking[df_tracking.frame == frame][['bbox_left', 'bbox_top', 'bbox_right', 'bbox_bottom']])
        types = np.array(df_tracking[df_tracking.frame == frame]['type'])
        boxes_3d = np.array(df_tracking_frame[['dimensions_height', 'dimensions_width', 'dimensions_length', 'location_x', 'location_y', 'location_z', 'rotation_y']])
        corners_3d_velos = []
        for box_3d in boxes_3d:
            corners_3d_cam2 = compute_3d_box_cam2(*box_3d)
            corners_3d_velo = calib.project_rect_to_velo(corners_3d_cam2.T)
            corners_3d_velos += [corners_3d_velo]
        image = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png' % frame))
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, 'velodyne_points/data/%010d.bin' % frame))
        imu_data = read_imu(os.path.join(DATA_PATH, 'oxts/data/%010d.txt' % frame))
        publish_camera(cam_pub, bridge, image, boxes_2d, types)
        publish_point_cloud(pcl_pub, point_cloud)
        publish_ego_car(ego_pub)
        publish_imu(imu_pub, imu_data)
        publish_gps(gps_pub, imu_data)
        publish_3dbox(box3d_pub, corners_3d_velos, types)
        rospy.loginfo("published")
        rate.sleep()
        frame += 1
        frame %= 154

A screenshot of the run:
