CSPRS


Journal Contents

Volume 26, Issue 3 [ Views: 1695 ]
Airiti Library Full Issue Download: 26(3).pdf
Please note the Airiti Library account (JPRS000002) and password (8YqFKj) for use when downloading articles. Thank you for your support!

Pages: 127-141    Road Marking Extraction and Classification from Mobile LiDAR Point Clouds Derived Imagery Using Transfer Learning
Chinese Title 應用轉移學習從移動式光達點雲影像中萃取並分類路面標記
Title Road Marking Extraction and Classification from Mobile LiDAR Point Clouds Derived Imagery Using Transfer Learning
Chinese Author 賴格陸、曾義星
Author Miguel Luis R. Lagahit, Yi-Hsing Tseng
Chinese Abstract (translated) High-definition (HD) maps are the highly accurate 3D maps needed to assist autonomous vehicles. Automatically producing HD maps from mobile mapping data remains a challenge. This paper proposes a method that applies transfer learning to automatically extract and classify road markings from mobile LiDAR point clouds. The data processing pipeline includes preprocessing, training, extraction and classification, and accuracy assessment. Preprocessing first filters out non-road-surface points and then converts the point cloud into grid-based intensity images. In the training stage, selected training data are manually annotated and split to build the training and testing datasets; the training dataset can adopt existing openly available databases and is further augmented from the existing training data. The trained machine-learning model is then used to extract and classify road markings from the LiDAR intensity images, and the test results are evaluated against manually interpreted references: first the precision, recall, and F1 score of the extraction, then the error rate of the classification; finally, the classified point clouds are vectorized. The results show that the pre-trained U-Net model using 5 cm resolution LiDAR intensity images performs best. Given the high F1 scores and error rates below 15%, the proposed method successfully extracts and classifies road markings, with test performance comparable to recently published work. However, although the extraction completeness of the proposed method is better than that of the compared method, its classification accuracy is worse, mainly because this study performs extraction and classification simultaneously, whereas the compared method first extracts the markings and filters out noise point clusters before classifying them. Future research is suggested to separate the extraction and classification steps and add a filtering mechanism to reduce the classification error rate.
Abstract High Definition (HD) Maps are highly accurate 3D maps that contain features on or near the road that assist with navigation in Autonomous Vehicles (AVs). One of the main challenges in making such maps is the automatic extraction and classification of road markings from mobile mapping data. In this paper, a methodology is proposed that uses transfer learning to extract and classify road markings from mobile LiDAR. The data procedure includes preprocessing, training, extraction and classification, and accuracy assessment. Initially, point clouds were filtered and converted to intensity-based images using several grid-cell sizes. The images were then manually annotated and split to create the training and testing datasets. The training dataset underwent augmentation before serving as input for evaluating multiple openly available pre-trained neural network models. The models were then applied to the testing dataset and assessed based on their precision, recall, and F1 scores for extraction, as well as their error rates for classification. Further processing generated classified point clouds and polygonal vector shapefiles. The results indicate that, among the models and training sets evaluated, the best was the pre-trained U-Net model trained on the intensity-based images with a 5 cm resolution. It achieved F1 scores comparable with recent work and error rates below 15%. However, the classification error rates are still around two to four times those of recent work; as such, it is recommended to separate the extraction and classification procedures, with an intermediate step to remove misclassifications.
Chinese Keywords 移動光達、道路標記、萃取、分類、轉移學習
Keywords Mobile LiDAR, Road Marking, Extraction, Classification, Transfer Learning
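The preprocessing described in the abstracts, rasterizing filtered road-surface points into a grid intensity image, can be sketched as follows. This is a minimal illustration under assumed conventions (an (x, y, intensity) point array and mean-intensity pixels; the function name `intensity_image` is hypothetical), not the authors' implementation; the 5 cm default follows the resolution the paper found to work best.

```python
import numpy as np

def intensity_image(points, cell=0.05):
    """Rasterize road-surface LiDAR points into a grid intensity image.

    points : (N, 3) array of (x, y, intensity) rows, assumed already
             filtered to road-surface returns.
    cell   : grid-cell size in metres (5 cm per the paper's best model).
    Each pixel holds the mean intensity of the points falling in it;
    empty cells are 0.
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell).astype(int)  # per-point cell index
    w, h = idx.max(axis=0) + 1
    total = np.zeros((h, w))
    count = np.zeros((h, w))
    # accumulate intensity sums and point counts per cell
    np.add.at(total, (idx[:, 1], idx[:, 0]), points[:, 2])
    np.add.at(count, (idx[:, 1], idx[:, 0]), 1)
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)
```

`np.add.at` accumulates unbuffered, so repeated cell indices are summed correctly, which a plain fancy-index assignment would not do.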
Pages: 143-162    Exploratory Research of a Novel GCPs Matching Model for UAV Image Geometric Correction through Biological Sequence Algorithms
Chinese Title 以生物序列演算法進行UAV影像幾何校正控制點匹配新型模式之探索性研究
Title Exploratory Research of a Novel GCPs Matching Model for UAV Image Geometric Correction through Biological Sequence Algorithms
Chinese Author 雷祖強、吳仕傑、李哲源、曾國欣
Author Tsu-Chiang Lei, Shih-Chieh Wu, Che-Yuan Li, Guo-Shin Tzeng
Chinese Abstract (translated) This study developed a novel semi-automatic ground control point (GCP) matching model to address the UAV image correction problem. We used biological sequence algorithms as the concept for the image matching procedure: the global feature alignment technique of the Needleman-Wunsch algorithm (NWA) first matches objects between the two images (the reference image and the image to be corrected); after identifying successfully matched objects, the local feature alignment technique of the Smith-Waterman algorithm (SWA) extracts GCPs from the matched objects; finally, the polynomial model method performs geometric correction with the proposed GCPs and evaluates their value. The case study results show that, in addition to automatically extracting appropriate GCPs from the images used in this study, after geometric correction and manual removal of control points with residuals greater than 1 unit, the RMSE (root-mean-square error) of the remaining control points was 0.52418, demonstrating that the proposed method is applicable to the GCP matching problem for high-resolution imagery.
Abstract This study developed a novel semi-automatic ground control point (GCP) matching model that resolves the problem of GCP matching when carrying out geometric correction between two UAV images. The research utilized the concept of Biological Sequence Algorithms (BSA) for the image matching procedure. More specifically, the Needleman-Wunsch algorithm (NWA) was first used as a global object alignment technique to match objects in the two images (the reference image and the image to be corrected). After identifying the successfully matched objects, the Smith-Waterman algorithm (SWA) was used as a local feature alignment technique to extract GCPs from the matched objects. Finally, the polynomial model method was applied for geometric correction and assessment of the proposed model. The results of this case study showed that appropriate GCPs were automatically extracted from the images used in this study. After geometric correction, the RMSE (root-mean-square error) value was 0.52418, indicating that the method is appropriate for application to high-resolution images.
Chinese Keywords 影像幾何校正、生物序列分析、自動化匹配、無人載具
Keywords Image Geometric Correction, Biological Sequence Algorithms, GCPs Automatically Matching Procedure
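The two alignment algorithms named in the abstracts have classic dynamic-programming forms, sketched below for generic token sequences. How image objects and their features are encoded as sequences, and the scoring used, are the paper's contribution and are not reproduced here; the function names and unit scores are illustrative assumptions.

```python
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score (NWA): every element of both sequences is aligned."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1))
    H[:, 0] = gap * np.arange(n + 1)  # leading gaps in b
    H[0, :] = gap * np.arange(m + 1)  # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,  # (mis)match
                          H[i - 1, j] + gap,    # gap in b
                          H[i, j - 1] + gap)    # gap in a
    return H[n, m]

def smith_waterman(a, b, match=1, mismatch=-1, gap=-1):
    """Local alignment score (SWA): best-scoring matching subsequence pair."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1))  # first row/column stay 0: free start
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0.0,                  # free restart anywhere
                          H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap,
                          H[i, j - 1] + gap)
    return H.max()
```

Needleman-Wunsch charges for leading and trailing gaps and scores the sequences end to end (global alignment), which suits matching whole object inventories between two images; Smith-Waterman clamps scores at zero so an alignment can start and end anywhere (local alignment), which suits pulling out the best-matching feature stretch within an object pair.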