分享

基于端到端深度学习的自动驾驶:AirSim教程(包含Ubuntu18.04下配置AIrsim仿真环境解决方案)

 netouch 2023-08-03 发布于北京

这是微软自动驾驶食谱的第一个教程(目前共两个)。之前看到过,这里记录一下。

https://github.com/microsoft/AutonomousDrivingCookbook

前言

在本教程中,将学习如何使用AirSim仿真环境收集的数据来训练和测试自动驾驶的端到端深度学习模型。将训练一个模型,学习如何通过山脉/景观地图的一部分,在AirSim使用一个单一的正面面对网络摄像头(webcam)采集到的画面作为输入去操纵汽车。这样的任务通常被认为是自动驾驶的“hello world”,

教程结构

Keras框架。

步骤0 -数据探索和准备

概述

训练一个深度学习模型,
输入:摄像头的画面和车辆最后已知状态
输出:转向角度预测。

端到端自动驾驶

不用解释了吧。不像需要特征工程等传统机器学习方法,数据输入神经网络,直接得到输出。唯一的缺点就是需要大量数据,但模拟器可以用来采集数据,之后用少量的真实数据进行微调(Behavioral Cloning),便可实现端到端自动驾驶。

下载数据集:

https:///AirSimTutorialDataset

数据集的百度网盘链接:

链接:https://pan.baidu.com/s/1l_YJ6c9VAJS_pkIJeSWSFw
提取码:fwr3
复制这段内容后打开百度网盘手机App,操作更方便哦

代码解读如下:

注:<< ...... >>符号代表你需要对代码根据自己的实际路径进行修改。


%matplotlib inline
import numpy as np
import pandas as pd
import h5py
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw
import os
import Cooking
import random

# << Point this to the directory containing the raw data >>
RAW_DATA_DIR = 'data_raw/'

# << Point this to the desired output directory for the cooked (.h5) data >>
COOKED_DATA_DIR = 'data_cooked/'

# The folders to search for data under RAW_DATA_DIR
# For example, the first folder searched will be RAW_DATA_DIR/normal_1
DATA_FOLDERS = ['normal_1', 'normal_2', 'normal_3', 'normal_4', 'normal_5', 'normal_6', 'swerve_1', 'swerve_2', 'swerve_3']

# The size of the figures in this notebook
FIGURE_SIZE = (10,10)

数据集由两部分组成:图像和.tsv 文件。先看一下.tsv 文件格式。


sample_tsv_path = os.path.join(RAW_DATA_DIR, 'normal_1/airsim_rec.txt')
sample_tsv = pd.read_csv(sample_tsv_path, sep='\t') # https://blog.csdn.net/b876144622/article/details/80781917
sample_tsv.head()

在这里插入图片描述
数据集包含标签:转向角度;图像名称等。
看一个图片:normal_1文件夹下的img_0。


sample_image_path = os.path.join(RAW_DATA_DIR, 'normal_1/images/img_0.png')
sample_image = Image.open(sample_image_path)
plt.title('Sample Image')
plt.imshow(sample_image)
plt.show()

在这里插入图片描述
我们只对图像中的一小部分感兴趣,ROI区域如下图中红框区域:

sample_image_roi = sample_image.copy()

fillcolor=(255,0,0)
draw = ImageDraw.Draw(sample_image_roi)
points = [(1,76), (1,135), (255,135), (255,76)]
for i in range(0, len(points), 1): # 1是步长,默认为1,不写也行。https://www.runoob.com/python/python-func-range.html
    draw.line([points[i], points[(i 1)%len(points)]], fill=fillcolor, width=3) # 这里的写法,可以学习
del draw

plt.title('Image with sample ROI')
plt.imshow(sample_image_roi)
plt.show()

在这里插入图片描述
提取ROI既可以减少训练时间,也可以减少训练模型所需的数据量。它还可以防止模型因关注于环境中的不相关特征(例如山、树等)而混淆。
数据增强

  1. 沿竖直方向镜像图像,同时将转角取负号。
  2. 更改全局光照

把所有标签放到一个变量中,以更好地观察。

full_path_raw_folders = [os.path.join(RAW_DATA_DIR, f) for f in DATA_FOLDERS]

dataframes = []
for folder in full_path_raw_folders:
    current_dataframe = pd.read_csv(os.path.join(folder, 'airsim_rec.txt'), sep='\t')
    current_dataframe['Folder'] = folder
    dataframes.append(current_dataframe)
    
dataset = pd.concat(dataframes, axis=0) # 把{list:9}变成1个DataFrame{46738,8}

print('Number of data points: {0}'.format(dataset.shape[0]))

dataset.head()

在这里插入图片描述
观察文件夹的命名:'normal’和'swerve’两种,这指的是两种不同的驾驶策略,我们看一下二者区别。从每个驾驶风格对彼此绘制数据点的一部分。

min_index = 100
max_index = 1100
steering_angles_normal_1 = dataset[dataset['Folder'].apply(lambda v: 'normal_1' in v)]['Steering'][min_index:max_index] # 这里的写法堪称牛逼
steering_angles_swerve_1 = dataset[dataset['Folder'].apply(lambda v: 'swerve_1' in v)]['Steering'][min_index:max_index]

plot_index = [i for i in range(min_index, max_index, 1)]

fig = plt.figure(figsize=FIGURE_SIZE)
ax1 = fig.add_subplot(111)

ax1.scatter(plot_index, steering_angles_normal_1, c='b', marker='o', label='normal_1')
ax1.scatter(plot_index, steering_angles_swerve_1, c='r', marker='o', label='swerve_1')
plt.legend(loc='upper left');
plt.title('Steering Angles for normal_1 and swerve_1 runs')
plt.xlabel('Time')
plt.ylabel('Steering Angle')
plt.show()

在这里插入图片描述
蓝色的点显示的是正常的驾驶策略,转向角度接近于零,车在道路上大部分时间都是直线行驶。
转弯驾驶策略使汽车几乎在道路上左右摇摆。在训练端到端深度学习模型时,由于我们没有做任何特征工程,我们的模型几乎完全依赖数据集来提供它需要的所有必要信息。因此,要考虑到模型可能遇到的任何急转弯,并在它开始偏离道路时给予它纠正自身的能力,我们需要在训练时为它提供足够的这样的例子。因此,我们创建了这些额外的数据集来关注这些场景。

现在,让我们看看每个类别中的数据点数量。

dataset['Is Swerve'] = dataset.apply(lambda r: 'swerve' in r['Folder'], axis=1) # https://www.cnblogs.com/liulangmao/p/9342806.html。直呼内行
grouped = dataset.groupby(by=['Is Swerve']).size().reset_index() # 关于pandas的用法还是太强了。。。
grouped.columns = ['Is Swerve', 'Count']

def make_autopct(values):
    def my_autopct(percent):
        total = sum(values)
        val = int(round(percent*total/100.0))
        return '{0:.2f}%  ({1:d})'.format(percent,val)
    return my_autopct

pie_labels = ['Normal', 'Swerve']
fig, ax = plt.subplots(figsize=FIGURE_SIZE) # 函数返回一个figure图像和子图ax的array列表。https://www.cnblogs.com/komean/p/10670619.html
ax.pie(grouped['Count'], labels=pie_labels, autopct = make_autopct(grouped['Count'])) # https://www.cnblogs.com/biyoulin/p/9565350.html
plt.title('Number of data points per driving strategy')
plt.show()

在这里插入图片描述
1/4是Swerve数据,剩余的是normal数据,且只有47,000数据,因此网络不能太深。

让我们看看这两种策略下标签的分布情况。

bins = np.arange(-1, 1.05, 0.05)
normal_labels = dataset[dataset['Is Swerve'] == False]['Steering']
swerve_labels = dataset[dataset['Is Swerve'] == True]['Steering']

def steering_histogram(hist_labels, title, color):
    plt.figure(figsize=FIGURE_SIZE)
    n, b, p = plt.hist(hist_labels.as_matrix(), bins, normed=1, facecolor=color) # normed:参数指定密度,也就是每个条状图的占比例比,默认为1
    plt.xlabel('Steering Angle')
    plt.ylabel('Normalized Frequency')
    plt.title(title)
    plt.show()

steering_histogram(normal_labels, 'Normal label distribution', 'g') # https://blog.csdn.net/weixin_43085694/article/details/104147348
steering_histogram(swerve_labels, 'Swerve label distribution', 'r') # https://blog.csdn.net/m0_45408211/article/details/107583589

在这里插入图片描述
在这里插入图片描述
两个结论:

  • 汽车正常行驶时,转向角度几乎总是为零。这是一个严重的不平衡,如果这部分数据没有采样,模型将总是预测零,汽车将无法转弯。
  • 当使用转向策略驾驶汽车时,我们会得到在正常策略数据集中不会出现的急转弯例子。这验证了我们收集上述数据背后的原因。(将数据集分类normal和swerve两类)

此时,我们需要将原始数据合并成适合训练的压缩数据文件。这里,我们将使用**.h5文件**,因为这种格式非常适合支持大型数据集,而不需要一下子将所有数据都读入内存。它还可以与Keras无缝地工作。
编写数据集的代码很简单,但是很长。当它终止时,最终的数据集将有4部分:

  • image:图片数据,numpy array
  • previous_state:汽车的最后已知状态,numpy array,(steering, throttle, brake, speed)元组格式。
  • label:转角(我们要预测的),归一化到[-1,1]之间,numpy array
  • metadata:关于文件的metadata(他们来自哪里等),numpy array

我们把他们分成train/test/validation三部分。


train_eval_test_split = [0.7, 0.2, 0.1]
full_path_raw_folders = [os.path.join(RAW_DATA_DIR, f) for f in DATA_FOLDERS]
Cooking.cook(full_path_raw_folders, COOKED_DATA_DIR, train_eval_test_split)

在这里插入图片描述上述文件中导入的本地模块import Cooking解读如下:

import random
import csv
from PIL import Image
import numpy as np
import pandas as pd
import sys
import os
import errno
from collections import OrderedDict
import h5py
from pathlib import Path
import copy
import re

def checkAndCreateDir(full_path):
    '''Checks if a given path exists and if not, creates the needed directories.
            Inputs:
                full_path: path to be checked
    '''
    if not os.path.exists(os.path.dirname(full_path)):
        try:
            os.makedirs(os.path.dirname(full_path))
        except OSError as exc:  # Guard against race condition
            if exc.errno != errno.EEXIST:
                raise
                
def readImagesFromPath(image_names):
    ''' Takes in a path and a list of image file names to be loaded and returns a list of all loaded images after resize.
           Inputs:
                image_names: list of image names
           Returns:
                List of all loaded and resized images
    '''
    returnValue = []
    for image_name in image_names:
        im = Image.open(image_name)
        imArr = np.asarray(im)
        
        #Remove alpha channel if exists
        if len(imArr.shape) == 3 and imArr.shape[2] == 4: # 如果imArr的形状是3并且通道维度是4的话
            if (np.all(imArr[:, :, 3] == imArr[0, 0, 3])): # 如果所有图像的alpha通道都相等
                imArr = imArr[:,:,0:3] 3 # 移除alpha通道
        if len(imArr.shape) != 3 or imArr.shape[2] != 3:
            print('Error: Image', image_name, 'is not RGB.')
            sys.exit()            

        returnIm = np.asarray(imArr)

        returnValue.append(returnIm)
    return returnValue # 返回列表{list:32}。每个元素的形状是【144,256,3】
    
    
    
def splitTrainValidationAndTestData(all_data_mappings, split_ratio=(0.7, 0.2, 0.1)):
    '''Simple function to create train, validation and test splits on the data.
            Inputs:
                all_data_mappings: mappings from the entire dataset
                split_ratio: (train, validation, test) split ratio

            Returns:
                train_data_mappings: mappings for training data
                validation_data_mappings: mappings for validation data
                test_data_mappings: mappings for test data

    '''
    if round(sum(split_ratio), 5) != 1.0:
        print('Error: Your splitting ratio should add up to 1')
        sys.exit()

    train_split = int(len(all_data_mappings) * split_ratio[0])
    val_split = train_split   int(len(all_data_mappings) * split_ratio[1])

    train_data_mappings = all_data_mappings[0:train_split]
    validation_data_mappings = all_data_mappings[train_split:val_split]
    test_data_mappings = all_data_mappings[val_split:]

    return [train_data_mappings, validation_data_mappings, test_data_mappings]
    
def generateDataMapAirSim(folders):
    ''' Data map generator for simulator(AirSim) data. Reads the driving_log csv file and returns a list of 'center camera image name - label(s)' tuples
           Inputs:
               folders: list of folders to collect data from

           Returns:
               mappings: All data mappings as a dictionary. Key is the image filepath, the values are a 2-tuple:
                   0 -> label(s) as a list of double
                   1 -> previous state as a list of double
    '''

    all_mappings = {}
    for folder in folders:
        print('Reading data from {0}...'.format(folder))
        current_df = pd.read_csv(os.path.join(folder, 'airsim_rec.txt'), sep='\t')
        
        for i in range(1, current_df.shape[0] - 1, 1): # 因为包含之前的状态,所以从第1个开始,倒数第二个结束。
            previous_state = list(current_df.iloc[i-1][['Steering', 'Throttle', 'Brake', 'Speed (kmph)']])
            current_label = list((current_df.iloc[i][['Steering']]   current_df.iloc[i-1][['Steering']]   current_df.iloc[i 1][['Steering']]) / 3.0) # 用当下转角及前后转角的平均值作为label
            
            image_filepath = os.path.join(os.path.join(folder, 'images'), current_df.iloc[i]['ImageName']).replace('\\', '/')
            
            # Sanity check
            if (image_filepath in all_mappings):
                print('Error: attempting to add image {0} twice.'.format(image_filepath))
            
            all_mappings[image_filepath] = (current_label, previous_state) # all_mappings:字典{当下图像:(【转角(取当下和前后的平均值)】,【上一刻状态:转角,油门,刹车,速度】,{'data_raw/normal_1/images/img_1.png': ([-0.011840666666666668], [0.0, 0.0, 0.0, 0]),  ……共计46720
    
    mappings = [(key, all_mappings[key]) for key in all_mappings]# mappings:列表。[('data_raw/normal_1/images/img_1.png', ([-0.011840666666666668], [0.0, 0.0, 0.0, 0])),……共计46720
    
    random.shuffle(mappings)
    
    return mappings

def generatorForH5py(data_mappings, chunk_size=32):
    '''
    This function batches the data for saving to the H5 file
    '''
    for chunk_id in range(0, len(data_mappings), chunk_size):
        # Data is expected to be a dict of <image: (label, previousious_state)>
        # Extract the parts
        data_chunk = data_mappings[chunk_id:chunk_id   chunk_size]
        if (len(data_chunk) == chunk_size):
            image_names_chunk = [a for (a, b) in data_chunk]
            labels_chunk = np.asarray([b[0] for (a, b) in data_chunk])
            previous_state_chunk = np.asarray([b[1] for (a, b) in data_chunk])
            
            #Flatten and yield as tuple
            yield (image_names_chunk, labels_chunk.astype(float), previous_state_chunk.astype(float)) # 对于yiedl,把他当成return。https://blog.csdn.net/mieleizhi0522/article/details/82142856/。关于生成器和迭代器的概念自己好像一直不太懂。。。
            if chunk_id   chunk_size > len(data_mappings): # 对于多余的就不要了 
                raise StopIteration
    raise StopIteration
    
def saveH5pyData(data_mappings, target_file_path):
    '''
    Saves H5 data to file
    '''
    chunk_size = 32
    gen = generatorForH5py(data_mappings,chunk_size) # 实例化一个生成器

    image_names_chunk, labels_chunk, previous_state_chunk = next(gen)
    images_chunk = np.asarray(readImagesFromPath(image_names_chunk)) # 读取一组(32张)图片。images_chunk:【32,144,256,3】
    row_count = images_chunk.shape[0] # 可理解为一个batch的大小,用来计数

    checkAndCreateDir(target_file_path) # 检查目标文件是否存在
    with h5py.File(target_file_path, 'w') as f: # 打开文件并开始写入。https://blog.csdn.net/qq_34859482/article/details/80115237

        # Initialize a resizable dataset to hold the output
        images_chunk_maxshape = (None,)   images_chunk.shape[1:]
        labels_chunk_maxshape = (None,)   labels_chunk.shape[1:]
        previous_state_maxshape = (None,)   previous_state_chunk.shape[1:]

        dset_images = f.create_dataset('image', shape=images_chunk.shape, maxshape=images_chunk_maxshape, chunks=images_chunk.shape, dtype=images_chunk.dtype) # 创建数据集,关于这几个参数:'image':名字;shape:数据集形状。(32,144,256,3);maxshape:使数据集的大小可调整为此形状。(None, 144, 256, 3);chunks:每一小块的形状?(32, 144, 256, 3);dtype:数据类型。

        dset_labels = f.create_dataset('label', shape=labels_chunk.shape, maxshape=labels_chunk_maxshape, chunks=labels_chunk.shape, dtype=labels_chunk.dtype) # 这里应该只是占个内存
        
        dset_previous_state = f.create_dataset('previous_state', shape=previous_state_chunk.shape, maxshape=previous_state_maxshape,
                                       chunks=previous_state_chunk.shape, dtype=previous_state_chunk.dtype)
                                       
        dset_images[:] = images_chunk # 这里赋值
        dset_labels[:] = labels_chunk
        dset_previous_state[:] = previous_state_chunk

        for image_names_chunk, label_chunk, previous_state_chunk in gen: 
            image_chunk = np.asarray(readImagesFromPath(image_names_chunk)) # 上一次读取image_names_chunk是为了创建h5文件的格式(数据结构),这里是为了迭代读取
            
            # Resize the dataset to accommodate the next chunk of rows
            dset_images.resize(row_count   image_chunk.shape[0], axis=0) # 扩充形状
            dset_labels.resize(row_count   label_chunk.shape[0], axis=0)
            dset_previous_state.resize(row_count   previous_state_chunk.shape[0], axis=0)
            # Write the next chunk
            dset_images[row_count:] = image_chunk # 接着向下赋值
            dset_labels[row_count:] = label_chunk
            dset_previous_state[row_count:] = previous_state_chunk

            # Increment the row count
            row_count  = image_chunk.shape[0]
            
            
def cook(folders, output_directory, train_eval_test_split):
    ''' Primary function for data pre-processing. Reads and saves all data as h5 files.
            Inputs:
                folders: a list of all data folders
                output_directory: location for saving h5 files
                train_eval_test_split: dataset split ratio
    '''
    output_files = [os.path.join(output_directory, f) for f in ['train.h5', 'eval.h5', 'test.h5']]
    if (any([os.path.isfile(f) for f in output_files])):
       print('Preprocessed data already exists at: {0}. Skipping preprocessing.'.format(output_directory))

    else:
        all_data_mappings = generateDataMapAirSim(folders) # all_data_mappings:[('data_raw/normal_1/images/img_1662.png', ([-0.007812999999999999], [-0.007812999999999999, 0.501961, 0.0, 18])),……共计46720。数据的一个影射
        
        split_mappings = splitTrainValidationAndTestData(all_data_mappings, split_ratio=train_eval_test_split) # 把数据分为了三部分。{list:3} 32703、
        
        for i in range(0, len(split_mappings), 1):
            print('Processing {0}...'.format(output_files[i]))
            saveH5pyData(split_mappings[i], output_files[i])
            print('Finished saving {0}.'.format(output_files[i]))

又出bug了,报错信息如下:

Processing data_cooked/train.h5...
Traceback (most recent call last):
  File '/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Cooking.py', line 130, in generatorForH5py
    raise StopIteration
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File '/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/DataExplorationAndPreparation.py', line 133, in <module>
    Cooking.cook(full_path_raw_folders, COOKED_DATA_DIR, train_eval_test_split)
  File '/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Cooking.py', line 197, in cook
    saveH5pyData(split_mappings[i], output_files[i])
  File '/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Cooking.py', line 163, in saveH5pyData
    for image_names_chunk, label_chunk, previous_state_chunk in gen:
RuntimeError: generator raised StopIteration

Process finished with exit code 1

这是因为代码中手动抛出了异常,把该行代码注释掉即可。

def generatorForH5py(data_mappings, chunk_size=32):
    ...
    # raise StopIteration

Step 1 - 训练模型

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Lambda, Input, concatenate
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import ELU
from keras.optimizers import Adam, SGD, Adamax, Nadam
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, CSVLogger, EarlyStopping
import keras.backend as K
from keras.preprocessing import image

from keras_tqdm import TQDMNotebookCallback

import json
import os
import numpy as np
import pandas as pd
from Generator import DriveDataGenerator
from Cooking import checkAndCreateDir
import h5py
from PIL import Image, ImageDraw
import math
import matplotlib.pyplot as plt

# << The directory containing the cooked data from the previous step >>
COOKED_DATA_DIR = 'data_cooked/'

# << The directory in which the model output will be placed >>
MODEL_OUTPUT_DIR = 'model'

读取一些文件

train_dataset = h5py.File(os.path.join(COOKED_DATA_DIR, 'train.h5'), 'r') # https://www.jianshu.com/p/de9f33cdfba0
eval_dataset = h5py.File(os.path.join(COOKED_DATA_DIR, 'eval.h5'), 'r')
test_dataset = h5py.File(os.path.join(COOKED_DATA_DIR, 'test.h5'), 'r')

num_train_examples = train_dataset['image'].shape[0] # 32672
num_eval_examples = eval_dataset['image'].shape[0] # 9344
num_test_examples = test_dataset['image'].shape[0] # 4672

batch_size=32

对于图像数据来说,将整个数据集加载到内存中代价太高。keras中有DataGenerator的概念,DataGenerator本质上是一个迭代器(iterator),它将从磁盘中分块读取数据。这使得保持CPU和GPU繁忙,提高吞吐量。

运用如下训练tricks:

  • 只有一小部分图像是感兴趣的-当生成批时,我们可以删除图像中不感兴趣的部分。
  • 随机水平翻转图像和转角(label)
  • 随机增加或者减小全剧光照信息
  • 在转向角为零的位置随机丢弃一定百分比的数据点,以便模型在训练时看到一个平衡的数据集。
  • 从数据集中的转弯策略中获得示例,以便模型学习如何快速转弯(将数据集进行了分类,Step0已完成)

为了实现上述tricks,我们在继承keras中的ImageDataGenerator类基础上,创建自己的类,这些代码在Generator.py文件中。(本Step末解读)

这里,我们直接利用带有如下参数的生成器:

  • Zero_Drop_Percentage: 0.9。随机丢弃90%的label = 0的数据点
  • Brighten_Range: 0.4。每幅图像的亮度将最多修改40%。图像:RGB到HSV,调整V,再转回RGB
  • ROI: [76,135,0,255]:图像感兴趣区域的x1, x2, y1, y2。
data_generator = DriveDataGenerator(rescale=1./255., horizontal_flip=True, brighten_range=0.4)
train_generator = data_generator.flow    (train_dataset['image'], train_dataset['previous_state'], train_dataset['label'], batch_size=batch_size, zero_drop_percentage=0.95, roi=[76,135,0,255])
eval_generator = data_generator.flow    (eval_dataset['image'], eval_dataset['previous_state'], eval_dataset['label'], batch_size=batch_size, zero_drop_percentage=0.95, roi=[76,135,0,255])

附上述文件导入的本地模块from Generator import DriveDataGenerator的解读:

from keras.preprocessing import image
import numpy as np
import keras.backend as K
import os
import cv2

class DriveDataGenerator(image.ImageDataGenerator):
    def __init__(self, # 初始化方法。调用该类时直接赋值给这些
                 featurewise_center=False,
                 samplewise_center=False,
                 featurewise_std_normalization=False,
                 samplewise_std_normalization=False,
                 zca_whitening=False,
                 zca_epsilon=1e-6,
                 rotation_range=0.,
                 width_shift_range=0.,
                 height_shift_range=0.,
                 shear_range=0.,
                 zoom_range=0.,
                 channel_shift_range=0.,
                 fill_mode='nearest',
                 cval=0.,
                 horizontal_flip=False,
                 vertical_flip=False,
                 rescale=None,
                 preprocessing_function=None,
                 data_format=None,
                 brighten_range=0):
        super(DriveDataGenerator, self).__init__(featurewise_center, # 调用父函数中的初始化方法,接受上述传入的值	
                 samplewise_center,
                 featurewise_std_normalization,
                 samplewise_std_normalization,
                 zca_whitening,
                 zca_epsilon,
                 rotation_range,
                 width_shift_range,
                 height_shift_range,
                 shear_range,
                 zoom_range,
                 channel_shift_range,
                 fill_mode,
                 cval,
                 horizontal_flip,
                 vertical_flip,
                 rescale,
                 preprocessing_function,
                 data_format)
        self.brighten_range = brighten_range

    def flow(self, x_images, x_prev_states = None, y=None, batch_size=32, shuffle=True, seed=None,
             save_to_dir=None, save_prefix='', save_format='png', zero_drop_percentage=0.5, roi=None):
        return DriveIterator(
            x_images, x_prev_states, y, self,
            batch_size=batch_size,
            shuffle=shuffle,
            seed=seed,
            data_format=self.data_format,
            save_to_dir=save_to_dir,
            save_prefix=save_prefix,
            save_format=save_format,
            zero_drop_percentage=zero_drop_percentage,
            roi=roi)
    
    def random_transform_with_states(self, x, seed=None):
        '''Randomly augment a single image tensor.
        # Arguments
            x: 3D tensor, single image.
            seed: random seed.
        # Returns
            A tuple. 0 -> randomly transformed version of the input (same shape). 1 -> true if image was horizontally flipped, false otherwise
        '''
        # x is a single image, so it doesn't have image number at index 0
        img_row_axis = self.row_axis
        img_col_axis = self.col_axis
        img_channel_axis = self.channel_axis

        is_image_horizontally_flipped = False

        # use composition of homographies
        # to generate final transform that needs to be applied
        if self.rotation_range:
            theta = np.pi / 180 * np.random.uniform(-self.rotation_range, self.rotation_range)
        else:
            theta = 0

        if self.height_shift_range:
            tx = np.random.uniform(-self.height_shift_range, self.height_shift_range) * x.shape[img_row_axis]
        else:
            tx = 0

        if self.width_shift_range:
            ty = np.random.uniform(-self.width_shift_range, self.width_shift_range) * x.shape[img_col_axis]
        else:
            ty = 0

        if self.shear_range:
            shear = np.random.uniform(-self.shear_range, self.shear_range)
        else:
            shear = 0

        if self.zoom_range[0] == 1 and self.zoom_range[1] == 1:
            zx, zy = 1, 1
        else:
            zx, zy = np.random.uniform(self.zoom_range[0], self.zoom_range[1], 2)

        transform_matrix = None
        if theta != 0:
            rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0],
                                        [np.sin(theta), np.cos(theta), 0],
                                        [0, 0, 1]])
            transform_matrix = rotation_matrix

        if tx != 0 or ty != 0:
            shift_matrix = np.array([[1, 0, tx],
                                     [0, 1, ty],
                                     [0, 0, 1]])
            transform_matrix = shift_matrix if transform_matrix is None else np.dot(transform_matrix, shift_matrix)

        if shear != 0:
            shear_matrix = np.array([[1, -np.sin(shear), 0],
                                    [0, np.cos(shear), 0],
                                    [0, 0, 1]])
            transform_matrix = shear_matrix if transform_matrix is None else np.dot(transform_matrix, shear_matrix)

        if zx != 1 or zy != 1:
            zoom_matrix = np.array([[zx, 0, 0],
                                    [0, zy, 0],
                                    [0, 0, 1]])
            transform_matrix = zoom_matrix if transform_matrix is None else np.dot(transform_matrix, zoom_matrix)

        if transform_matrix is not None:
            h, w = x.shape[img_row_axis], x.shape[img_col_axis]
            transform_matrix = image.transform_matrix_offset_center(transform_matrix, h, w)
            x = image.apply_transform(x, transform_matrix, img_channel_axis,
                                fill_mode=self.fill_mode, cval=self.cval)

        if self.channel_shift_range != 0:
            x = image.random_channel_shift(x,
                                     self.channel_shift_range,
                                     img_channel_axis)
        if self.horizontal_flip:
            if np.random.random() < 0.5:
                x = image.flip_axis(x, img_col_axis)
                is_image_horizontally_flipped = True

        if self.vertical_flip:
            if np.random.random() < 0.5:
                x = image.flip_axis(x, img_row_axis)
                
        if self.brighten_range != 0:
            random_bright = np.random.uniform(low = 1.0-self.brighten_range, high=1.0 self.brighten_range)
            
            #TODO: Write this as an apply to push operations into C for performance
            img = cv2.cvtColor(x, cv2.COLOR_RGB2HSV)
            img[:, :, 2] = np.clip(img[:, :, 2] * random_bright, 0, 255)
            x = cv2.cvtColor(img, cv2.COLOR_HSV2RGB)

        return (x, is_image_horizontally_flipped)

class DriveIterator(image.Iterator):
    '''Iterator yielding data from a Numpy array.

    # Arguments
        x: Numpy array of input data.
        y: Numpy array of targets data.
        image_data_generator: Instance of `ImageDataGenerator`
            to use for random transformations and normalization.
        batch_size: Integer, size of a batch.
        shuffle: Boolean, whether to shuffle the data between epochs.
        seed: Random seed for data shuffling.
        data_format: String, one of `channels_first`, `channels_last`.
        save_to_dir: Optional directory where to save the pictures
            being yielded, in a viewable format. This is useful
            for visualizing the random transformations being
            applied, for debugging purposes.
        save_prefix: String prefix to use for saving sample
            images (if `save_to_dir` is set).
        save_format: Format to use for saving sample images
            (if `save_to_dir` is set).
    '''

    def __init__(self, x_images, x_prev_states, y, image_data_generator,
                 batch_size=32, shuffle=False, seed=None,
                 data_format=None,
                 save_to_dir=None, save_prefix='', save_format='png', zero_drop_percentage = 0.5, roi = None):
        if y is not None and len(x_images) != len(y):
            raise ValueError('X (images tensor) and y (labels) '
                             'should have the same length. '
                             'Found: X.shape = %s, y.shape = %s' %
                             (np.asarray(x_images).shape, np.asarray(y).shape))

        if data_format is None:
            data_format = K.image_data_format()
        
        self.x_images = x_images
        
        self.zero_drop_percentage = zero_drop_percentage
        self.roi = roi
        
        if self.x_images.ndim != 4:
            raise ValueError('Input data in `NumpyArrayIterator` '
                             'should ave rank 4. You passed an array '
                             'with shape', self.x_images.shape)
        channels_axis = 3 if data_format == 'channels_last' else 1
        if self.x_images.shape[channels_axis] not in {1, 3, 4}:
            raise ValueError('NumpyArrayIterator is set to use the '
                             'data format convention ''   data_format   '' '
                             '(channels on axis '   str(channels_axis)   '), i.e. expected '
                             'either 1, 3 or 4 channels on axis '   str(channels_axis)   '. '
                             'However, it was passed an array with shape '   str(self.x_images.shape)  
                             ' ('   str(self.x_images.shape[channels_axis])   ' channels).')
        if x_prev_states is not None:
            self.x_prev_states = x_prev_states
        else:
            self.x_prev_states = None

        if y is not None:
            self.y = y
        else:
            self.y = None
        self.image_data_generator = image_data_generator
        self.data_format = data_format
        self.save_to_dir = save_to_dir
        self.save_prefix = save_prefix
        self.save_format = save_format
        self.batch_size = batch_size
        super(DriveIterator, self).__init__(x_images.shape[0], batch_size, shuffle, seed)

    def next(self):
        '''For python 2.x.

        # Returns
            The next batch.
        '''
        # Keeps under lock only the mechanism which advances
        # the indexing of each batch.
        with self.lock:
            index_array = next(self.index_generator)
        # The transformation of images is not under thread lock
        # so it can be done in parallel

        return self.__get_indexes(index_array)

    def __get_indexes(self, index_array):
        index_array = sorted(index_array)
        if self.x_prev_states is not None:
            batch_x_images = np.zeros(tuple([self.batch_size]  list(self.x_images.shape)[1:]),
                                      dtype=K.floatx())
            batch_x_prev_states = np.zeros(tuple([self.batch_size] list(self.x_prev_states.shape)[1:]), dtype=K.floatx())
        else:
            batch_x_images = np.zeros(tuple([self.batch_size]   list(self.x_images.shape)[1:]), dtype=K.floatx())

        if self.roi is not None:
            batch_x_images = batch_x_images[:, self.roi[0]:self.roi[1], self.roi[2]:self.roi[3], :]
            
        used_indexes = []
        is_horiz_flipped = []
        for i, j in enumerate(index_array):
            x_images = self.x_images[j]
            
            if self.roi is not None:
                x_images = x_images[self.roi[0]:self.roi[1], self.roi[2]:self.roi[3], :]
            
            transformed = self.image_data_generator.random_transform_with_states(x_images.astype(K.floatx()))
            x_images = transformed[0]
            is_horiz_flipped.append(transformed[1])
            x_images = self.image_data_generator.standardize(x_images)
            batch_x_images[i] = x_images

            if self.x_prev_states is not None:
                x_prev_states = self.x_prev_states[j]
                
                if (transformed[1]):
                    x_prev_states[0] *= -1.0
                
                batch_x_prev_states[i] = x_prev_states
            
            used_indexes.append(j)

        if self.x_prev_states is not None:
            batch_x = [np.asarray(batch_x_images), np.asarray(batch_x_prev_states)]
        else:
            batch_x = np.asarray(batch_x_images)
            
        if self.save_to_dir:
            for i in range(0, self.batch_size, 1):
                hash = np.random.randint(1e4)
               
                img = image.array_to_img(batch_x_images[i], self.data_format, scale=True)
                fname = '{prefix}_{index}_{hash}.{format}'.format(prefix=self.save_prefix,
                                                                        index=1,
                                                                        hash=hash,
                                                                        format=self.save_format)
                img.save(os.path.join(self.save_to_dir, fname))

        batch_y = self.y[list(sorted(used_indexes))]
        idx = []
        for i in range(0, len(is_horiz_flipped), 1):
            if batch_y.shape[1] == 1:
                if (is_horiz_flipped[i]):
                    batch_y[i] *= -1
                    
                if (np.isclose(batch_y[i], 0)):
                    if (np.random.uniform(low=0, high=1) > self.zero_drop_percentage):
                        idx.append(True)
                    else:
                        idx.append(False)
                else:
                    idx.append(True)
            else:
                if (batch_y[i][int(len(batch_y[i])/2)] == 1):
                    if (np.random.uniform(low=0, high=1) > self.zero_drop_percentage):
                        idx.append(True)
                    else:
                        idx.append(False)
                else:
                    idx.append(True)
                
                if (is_horiz_flipped[i]):
                    batch_y[i] = batch_y[i][::-1]

        batch_y = batch_y[idx]
        batch_x[0] = batch_x[0][idx]
        batch_x[1] = batch_x[1][idx]
        
        return batch_x, batch_y
        
    def _get_batches_of_transformed_samples(self, index_array):
        return self.__get_indexes(index_array)
        

第一个bug:

Traceback (most recent call last):
  File '/home/wqf/下载/pycharm-community-2020.3/plugins/python-ce/helpers/pydev/pydevd.py', line 1477, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File '/home/wqf/下载/pycharm-community-2020.3/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py', line 18, in execfile
    exec(compile(contents '\n', file, 'exec'), glob, loc)
  File '/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/train.py', line 51, in <module>
    data_generator = DriveDataGenerator(rescale=1. / 255., horizontal_flip=True,
  File '/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Generator.py', line 36, in __init__
    super(DriveDataGenerator, self).__init__(featurewise_center,
  File '/home/wqf/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/preprocessing/image.py', line 783, in __init__
    super(ImageDataGenerator, self).__init__(
  File '/home/wqf/anaconda3/lib/python3.8/site-packages/keras_preprocessing/image/image_data_generator.py', line 363, in __init__
    raise ValueError(
ValueError: `brightness_range should be tuple or list of two floats. Received: 0.0

Process finished with exit code 1

改正:

class DriveDataGenerator(image.ImageDataGenerator):
    ……
    brighten_range=None(原来是0,会导致第一个错)

后来发现会一直报错,经github上issue的高人点拨,发现是版本不对,于是切换环境为python3.6 keras2.1.2
(keras好像不同版本之间的兼容性不太好)

https://github.com/microsoft/AutonomousDrivingCookbook/issues/89

在这里插入图片描述
我发现CSDN给我吞了好多。。。我明明保存了的。。。。看来以后还是要发表出去,不能留在草稿箱。。。。

def draw_image_with_label(img, label, prediction=None):
    theta = label * 0.69 #Steering range for the car is  - 40 degrees -> 0.69 radians。方向盘转角范围是40度,即0.69弧度。label在Step0中被归一化到[-1,1]了,这里转换回来。
    line_length = 50
    line_thickness = 3
    label_line_color = (255, 0, 0)
    prediction_line_color = (0, 0, 255)
    pil_image = image.array_to_img(img, K.image_data_format(), scale=True)
    print('Actual Steering Angle = {0}'.format(label))
    draw_image = pil_image.copy()
    image_draw = ImageDraw.Draw(draw_image)
    first_point = (int(img.shape[1]/2),img.shape[0])
    second_point = (int((img.shape[1]/2)   (line_length * math.sin(theta))), int(img.shape[0] - (line_length * math.cos(theta))))
    image_draw.line([first_point, second_point], fill=label_line_color, width=line_thickness)
    
    if (prediction is not None):
        print('Predicted Steering Angle = {0}'.format(prediction))
        print('L1 Error: {0}'.format(abs(prediction-label)))
        theta = prediction * 0.69
        second_point = (int((img.shape[1]/2)   (line_length * math.sin(theta))), int(img.shape[0] - (line_length * math.cos(theta))))
        image_draw.line([first_point, second_point], fill=prediction_line_color, width=line_thickness)
    
    del image_draw
    plt.imshow(draw_image)
    plt.show()

[sample_batch_train_data, sample_batch_test_data] = next(train_generator)
for i in range(0, 3, 1):
    draw_image_with_label(sample_batch_train_data[0][i], sample_batch_test_data[i])

在这里插入图片描述下面定义网络:

image_input_shape = sample_batch_train_data[0].shape[1:]
state_input_shape = sample_batch_train_data[1].shape[1:]
activation = 'relu'

#Create the convolutional stacks
pic_input = Input(shape=image_input_shape)

img_stack = Conv2D(16, (3, 3), name='convolution0', padding='same', activation=activation)(pic_input)
img_stack = MaxPooling2D(pool_size=(2,2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution1')(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution2')(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Flatten()(img_stack)
img_stack = Dropout(0.2)(img_stack)

#Inject the state input
state_input = Input(shape=state_input_shape)
merged = concatenate([img_stack, state_input])

# Add a few dense layers to finish the model
merged = Dense(64, activation=activation, name='dense0')(merged)
merged = Dropout(0.2)(merged)
merged = Dense(10, activation=activation, name='dense2')(merged)
merged = Dropout(0.2)(merged)
merged = Dense(1, name='output')(merged)

adam = Nadam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model = Model(inputs=[pic_input, state_input], outputs=merged)
model.compile(optimizer=adam, loss='mse')
model.summary()

在这里插入图片描述
运用如下回调函数:

  • ReduceLrOnPlateau
  • CsvLogger
  • ModelCheckpoint
  • EarlyStopping
plateau_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=0.0001, verbose=1)
checkpoint_filepath = os.path.join(MODEL_OUTPUT_DIR, 'models', '{0}_model.{1}-{2}.h5'.format('model', '{epoch:02d}', '{val_loss:.7f}'))
checkAndCreateDir(checkpoint_filepath)
checkpoint_callback = ModelCheckpoint(checkpoint_filepath, save_best_only=True, verbose=1)
csv_callback = CSVLogger(os.path.join(MODEL_OUTPUT_DIR, 'training_log.csv'))
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=10, verbose=1)
callbacks=[plateau_callback, csv_callback, checkpoint_callback, early_stopping_callback, TQDMNotebookCallback()]

开始训练模型

history = model.fit_generator(train_generator, steps_per_epoch=num_train_examples//batch_size, epochs=500, callbacks=callbacks,                   validation_data=eval_generator, validation_steps=num_eval_examples//batch_size, verbose=2)

结果可视化

[sample_batch_train_data, sample_batch_test_data] = next(train_generator)
predictions = model.predict([sample_batch_train_data[0], sample_batch_train_data[1]])
for i in range(0, 3, 1):
    draw_image_with_label(sample_batch_train_data[0][i], sample_batch_test_data[i], predictions[i])

在这里插入图片描述

Step 2 - 测试模型(可以不用看)

首先要配置环境,又是漫漫长路。
所幸路上有高人指点:

https://blog.csdn.net/mangohhhh/article/details/107215512

不得不说有能力的话还是看官方文档最直接,也少了一些不必要的弯路:

https://microsoft./AirSim/build_linux/

当然中间也必不可少的会出一些bug:

比如说git clone Airsim时下载到15%就会报错。经过如下修改,成功突破了15%这条坎。

https://blog.csdn.net/haockl/article/details/103846695

原因
由于git默认缓存大小不足导致的,使用下面的命令增加缓存大小
解决:

git config --global http.postBuffer 20000000

在这里插入图片描述第二个BUG:
在执行Airsim中的./setup.sh时,出现如下错误:

在这里插入图片描述这里参考如下博客:

https://blog.csdn.net/qq_44717317/article/details/103192013

在这里插入图片描述
在这里插入图片描述成功解决。

第三个不算bug的bug:car_assets.zip正如上述博客中所说,下载巨慢,解决方案:

  • 选择不安装(如果用不到汽车仿真)
  • 在上述博客中的百度云盘下载,之后进行一系列修改
  • 多尝试几次

第四个bug:
执行./build.sh时,会报错。
在这里插入图片描述官网说要用clang8,于是安装。

在这里插入图片描述
之前鼓捣GPU时,clang是6.0,现在需要8。这二者切换,可以参考如下博客:

https://blog.csdn.net/dumpdoctorwang/article/details/84567757

在这里插入图片描述
经过上述一顿操作后,执行./build.sh还是会报错。。。。
于是在github上找到了如下办法:

https://github.com/microsoft/AirSim/issues/2417

在这里插入图片描述虽然问题描述和自己的不太一样,但终归都是在./build.sh环节出的错误,抱着试一试的心态:

。。。

。。。

。。。

还是不行,经过自己两天的操作无果后,我决定给自己挖个坑。就这样吧。

好像和历史遗留问题有关,自己之前在配置GPU环境时,明明已经配置好3060TI的显卡,但是还是显示如下:
在这里插入图片描述于是自己在运行UE时,也出现如下问题:
在这里插入图片描述

cannot find a compatible vulkan device or driver.try updating your viedo driver to a more recent version and make sure your viedeo card supports vulank

同时Airsim的编译也有问题,所以自己决定先放。。。一。。。下。。。吧。
待到以后再解决吧。。。

运行./build.sh报错信息如下,这里记录吧。

  debug=false
  [[ 0 -gt 0 ]]
  '[' '!' -d ./external/rpclib/rpclib-2.2.1 ']'
  '[' -d ./cmake_build ']'
   which cmake
  CMAKE=/usr/bin/cmake
  false
  build_dir=build_release
   uname
  '[' Linux == Darwin ']'
  export CC=clang-8
  CC=clang-8
  export CXX=clang  -8
  CXX=clang  -8
  [[ -d ./AirLib/deps/eigen3/Eigen ]]
  echo 'putting build in build_release folder, to clean, just delete the directory...'
putting build in build_release folder, to clean, just delete the directory...
  [[ -f ./cmake/CMakeCache.txt ]]
  [[ -d ./cmake/CMakeFiles ]]
  folder_name=
  [[ ! -d build_release ]]
  mkdir -p build_release
  pushd build_release
  false
  folder_name=Release
  /usr/bin/cmake ../cmake -DCMAKE_BUILD_TYPE=Release
-- The C compiler identification is Clang 8.0.0
-- The CXX compiler identification is Clang 8.0.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/clang-8 - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - failed
-- Check for working CXX compiler: /usr/bin/clang  -8
-- Check for working CXX compiler: /usr/bin/clang  -8 - broken
CMake Error at /usr/share/cmake-3.19/Modules/CMakeTestCXXCompiler.cmake:59 (message):
  The C   compiler

    '/usr/bin/clang  -8'

  is not able to compile a simple test program.

  It fails with the following output:

    Change Dir: /home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp
    
    Run Build Command(s):/usr/bin/make cmTC_e514f/fast && /usr/bin/make  -f CMakeFiles/cmTC_e514f.dir/build.make CMakeFiles/cmTC_e514f.dir/build
    make[1]: 进入目录“/home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp”
    Building CXX object CMakeFiles/cmTC_e514f.dir/testCXXCompiler.cxx.o
    /usr/bin/clang  -8    -o CMakeFiles/cmTC_e514f.dir/testCXXCompiler.cxx.o -c /home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp/testCXXCompiler.cxx
    Linking CXX executable cmTC_e514f
    /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_e514f.dir/link.txt --verbose=1
    /usr/bin/clang  -8 CMakeFiles/cmTC_e514f.dir/testCXXCompiler.cxx.o -o cmTC_e514f 
    /usr/bin/ld: 找不到 -lstdc  
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    CMakeFiles/cmTC_e514f.dir/build.make:105: recipe for target 'cmTC_e514f' failed
    make[1]: *** [cmTC_e514f] Error 1
    make[1]: 离开目录“/home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp”
    Makefile:140: recipe for target 'cmTC_e514f/fast' failed
    make: *** [cmTC_e514f/fast] Error 2
    
    

  

  CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
  CMakeLists.txt:2 (project)


-- Configuring incomplete, errors occurred!
See also '/home/wqf/AirSim/build_release/CMakeFiles/CMakeOutput.log'.
See also '/home/wqf/AirSim/build_release/CMakeFiles/CMakeError.log'.
  popd
~/AirSim ~/AirSim
  rm -r build_release
  exit 1

同时也推荐两个比较好的环境配置博文:

https://blog.csdn.net/weixin_39059031/article/details/84028487
https://blog.csdn.net/mangohhhh/article/details/107215512

上面推荐过,看人家的操作怎么就这么得心应手。。。哎

。。。

。。。

。。。





之前我成功再Windows上安装了Airsim(哈哈,但是忘了怎么装的了。。。。)

可能是windows下集成的比较好?官方已经编译好了,而在linux下是作为UE的一个插件的?如果是这样的话,那我果然不适合使用linux。

要么是因为我之前装Carla时和Undicaty时已经配置好环境了?

哎。。。心累

—————————————————————————分割线——————————————————————————————
2020.3.10更新
我又回来填坑了。
之前自己的配置过程是自己编译源码。其实配置环境的方式有两种:
1.编译源码(适合懂计算机等各种的大佬)
(上述两个别人配置环境的引用即是选择的这种方式)
2.直接使用编译好的文件。

之前自己没有认真阅读官方文档。这次去官网好好看了下。建议像我一样的老白选择“Download Binaries”选项。
官网

在这里插入图片描述下载好之后,解压文件,直接运行即可。

在这里插入图片描述
这样便算是配置好了环境。

言归正传

Step 2 - 测试模型

Windows下:

代码解读如下:

from keras.models import load_model
import sys
import numpy as np
import glob
import os

if ('../../PythonClient/' not in sys.path): # sys.path是python的搜索模块的路径集,是一个list。
# 执行前:['D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\python36.zip', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\DLLs', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4', '', 'C:\\Users\\文强\\AppData\\Roaming\\Python\\Python36\\site-packages', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\win32', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\win32\\lib', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\Pythonwin', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\文强\\.ipython']
    sys.path.insert(0, '../../PythonClient/')
# 执行后:['../../PythonClient/', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\python36.zip', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\DLLs', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4', '', 'C:\\Users\\文强\\AppData\\Roaming\\Python\\Python36\\site-packages', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\win32', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\win32\\lib', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\Pythonwin', 'D:\\Anaconda\\anaconda3\\envs\\keras2.2.4\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\文强\\.ipython']

# 就是添加了个路径,用于后续搜索模块

from AirSimClient import * # 导入本地模块

# << Set this to the path of the model >>
# If None, then the model with the lowest validation loss from training will be used
MODEL_PATH = None

if (MODEL_PATH == None):
    models = glob.glob('model/models/*.h5') # glob.glob:查找符合特定规则的文件路径名。https://blog.csdn.net/georgeai/article/details/81035422
    best_model = max(models, key=os.path.getctime) # 按照最后时间取模型
    MODEL_PATH = best_model
    
print('Using model {0} for testing.'.format(MODEL_PATH))

在进行如下代码之前,要保证你的仿真进行器是开着的

model = load_model(MODEL_PATH)

client = CarClient() # 实例化一个车类
client.confirmConnection() # 确认是否连接成功
client.enableApiControl(True) # 使键盘控制模式转换到API模式
car_controls = CarControls() # 在API模式下,通过该类来控制汽车
print('Connection established!')

下述代码将设置car的初始状态,以及一些用于存储模型输出的缓冲区

car_controls.steering = 0
car_controls.throttle = 0
car_controls.brake = 0

image_buf = np.zeros((1, 59, 255, 3))
state_buf = np.zeros((1,4))

定义一个helper函数来从AirSim读取RGB图像,并准备让模型使用它

def get_image():
    image_response = client.simGetImages([ImageRequest(0, AirSimImageType.Scene, False, False)])[0]
    image1d = np.fromstring(image_response.image_data_uint8, dtype=np.uint8)
    image_rgba = image1d.reshape(image_response.height, image_response.width, 4)
    
    return image_rgba[76:135,0:255,0:3].astype(float)

运用控制块来运行小车。因为我们的模型不能预测速度,所以我们将试图保持汽车以5米/秒的恒定速度行驶。运行下面的块将使得模型驾驶汽车!

while (True):
    car_state = client.getCarState()
    
    if (car_state.speed < 5):
        car_controls.throttle = 1.0
    else:
        car_controls.throttle = 0.0
    
    image_buf[0] = get_image()
    state_buf[0] = np.array([car_controls.steering, car_controls.throttle, car_controls.brake, car_state.speed])
    model_output = model.predict([image_buf, state_buf])
    car_controls.steering = round(0.5 * float(model_output[0][0]), 2)
    
    print('Sending steering = {0}, throttle = {1}'.format(car_controls.steering, car_controls.throttle))
    
    client.setCarControls(car_controls)

关于github的issue中为什么会有人去说换一个编译器,可能出自下述页面:
在这里插入图片描述

关于那个历史遗留问题,其实是没有问题的。只不过远程时由于使用的协议,调用不了GPU渲染出图形界面。详见我的下述博文文末:

配置深度强化学习环境(Ubuntu18.04 3060TI显卡 远程控制)

在这里插入图片描述
现在看这个模型确实象征意义更大一些,没有特别复杂的地方。就这样吧,完结。

    本站是提供个人知识管理的网络存储空间,所有内容均由用户发布,不代表本站观点。请注意甄别内容中的联系方式、诱导购买等信息,谨防诈骗。如发现有害或侵权内容,请点击一键举报。
    转藏 分享 献花(0

    0条评论

    发表

    请遵守用户 评论公约

    类似文章 更多