A review on underwater autonomous environmental perception and target grasp, the challenge of robotic organism capture



Ocean Engineering journal homepage: www.elsevier.com/locate/oceaneng

Review

Hai Huang a, Qirong Tang b, Jiyong Li a, Wanli Zhang a, Xuan Bao a, Haitao Zhu a, Gang Wang a,*

a National Key Laboratory of Science and Technology on Underwater Vehicle, Harbin Engineering University, Harbin, 150001, China
b Laboratory of Robotics and Multibody System, Tongji University, Shanghai, 201804, China

ARTICLE INFO

Keywords: Underwater vehicle; Environmental autonomous perception; Underwater vehicle manipulator system; Target capture

ABSTRACT

The worldwide demand for high-protein seafood is growing rapidly. Completing the capture with robots instead of divers will not only reduce the cost, but also improve operation safety. Although a great number of underwater vehicles have been developed, it is still difficult for them to realize agile and uninjured sea organism capture autonomously. This paper investigates the current state of the art in underwater autonomous and dexterous operation robots, underwater autonomous environmental perception, underwater vehicle-manipulator system modeling and coordinated control, and uninjured target grasp, in order to analyze the difficulties and research methodologies for the development of robots for autonomous and uninjured sea organism capture. To analyze the autonomous organism capture process, this article presents two different open-frame underwater vehicles, an absorption type and a grasp type. The analysis includes the autonomous visual search and detection process and an acceleration-level task-priority redundancy resolution approach for trajectory planning and grasp capture. Experiments on both the autonomous organism grasp and absorption processes have been performed.

1. Introduction

The worldwide demands for high-protein seafood, marine drugs and healthy products such as seashell and sea cucumber are growing rapidly. According to statistics from the Pacific Sea Urchin and Cucumber Harvest Association, Chinese annual imports of high-protein seafood currently amount to tens of billions of dollars and are increasing by 10% annually (Geoff Krause, 2014). In comparison, excessive artificial breeding can bring about serious impacts on the environment, so naturally produced organisms are superior. Currently, offshore natural organisms such as sea cucumber, seashell, urchin, etc. are mostly captured by divers. However, diver fishing is not only limited by duration in relatively deep water (only a few hours at 20 m depth), but also suffers from hypothermia and life hazards. Robotic capture, on the contrary, will not only reduce the fishing cost, but also improve operation safety. Some Remotely Operated Vehicles (ROVs) have been developed, for example, the submarine harvesting ROV of Norway (see Fig. 1), which can capture large numbers of sea urchins through pilot-controlled underwater vacuum absorption (Sten Ivar Siikavuopio, 2010). But the vacuum absorption could cause submarine vegetation

environmental damage and silt carrying, and the fishing speed of remotely operated capture is still limited.

Nowadays, a great number of underwater vehicles have been developed for applications in submarine rescue, oceanic engineering, scientific expedition, etc. Highly integrated underwater robots have been invented. For example, the REMUS-100 AUV (Autonomous Underwater Vehicle) (Kongsberg Company), Bluefin AUV (General Dynamics Mission Systems, Inc.), Autosub AUV (Autonomous Undersea Vehicle Applications Center), Videoray ROV (VideoRay LLC), the oceanic bottom flying node (Hongde Qin et al., 2019), etc. are applied to underwater observation and survey; the H300 ROV of the French ECA group, the SMD (Soil Machine Dynamics Ltd) working-class ROV, the SAUVIM I-AUV (Intervention AUV) (Giacomo Marani et al., 2009), the Girona 500 I-AUV (Ribas, D. et al., 2015) and so forth are developed for field manipulation operations. Currently, submarine manipulations are mainly undertaken by manned submersibles and ROVs (Hu et al., 2016), both of which are equipped with manipulators. Their operations are realized through pilot operation or remote control. ROVs are launched from specific mother vessels with an umbilical cable for communication and energy supply;

* Corresponding author. E-mail address: [email protected] (G. Wang). https://doi.org/10.1016/j.oceaneng.2019.106644 Received 13 May 2019; Received in revised form 25 October 2019; Accepted 26 October 2019 0029-8018/© 2019 Elsevier Ltd. All rights reserved.

Please cite this article as: Hai Huang, Ocean Engineering, https://doi.org/10.1016/j.oceaneng.2019.106644


Fig. 1. Submarine harvesting ROV (Copyright Norway submarine harvester).

Fig. 2. Comparisons between the operation of a diver and robot.

Fig. 3. Manipulator coordination of Amadeus (Casalino G. et al., 2001).

they are normally large and heavy for transportation and handling, with complex interfaces that can cause pilot fatigue after long hours of operation (Mario Prats et al., 2011).

Both I-AUVs and remotely autonomous underwater manipulation vehicles represent new concepts of underwater autonomous operation robots (Giacomo Marani et al., 2014). They are developed to complete environmental perception, localization, analysis, decision making and operative missions autonomously and independently in complicated environments (Corina Barbălată et al., 2014). Different from ROVs, AUVs normally cruise with self-carried batteries and streamlined configurations. They are superior in gyration and cruising. However, in order to realize autonomous underwater operation, researchers should not only improve robot intelligence, precise control and environmental perception, but also overcome the disturbances from the manipulator and the oceanic current (Zhang et al., 2012). To the best of our knowledge, only a minority of research institutes have been engaged in this field.

The realization of underwater robotic organism capture missions requires overcoming some novel difficulties. Since light is scattered and absorbed by silt, water and suspended particles, the images are grayed and obscured, with reduced contrast (Gianluca Antonelli et al., 2014). On the other hand, sea cucumbers are soft and smooth organisms, and seashells attach themselves to rocks for protection once scared; both of them live close to rocks with appearances resembling their surroundings. It is difficult for existing underwater robots to autonomously perceive such complicated organism habitats and afterward realize rapid and uninjured grasp (see Fig. 2). Therefore, the National Natural Science Foundation of China (NSFC) launched a group of key projects on underwater vehicle environmental perception and target capture to realize robotic capturing in 2016.
The main objectives of these projects are to develop low-cost underwater robot capturing platforms on the basis of the following researches: ① environmental perception of underwater landscapes, plants and currents; ② rapid detection, recognition and tracking of target organisms; ③ rapid capture of target organisms while keeping them alive. From these researches, not only is the oceanic organism harvest expected to become automated and mechanized, but underwater rescue, oceanic engineering and submarine exploration are also propelled.

Therefore, this paper investigates the challenges and development of underwater robots for autonomous environmental perception and target organism capture. This article reviews the development of underwater autonomous manipulation robots, including I-AUVs and remotely autonomous underwater vehicles, and then proposes some methods for autonomous robotic organism capture. It is organized into five sections. Following the introduction, the state-of-the-art analysis is presented in Section 2. The analysis of the autonomous capture process for underwater robots is investigated in Section 3. In Section 4, simulations and experiments are performed to verify the proposed methodologies. Finally, conclusions are drawn in Section 5.

2. State of the art analysis

2.1. Underwater autonomous manipulation robot literature

Since the 1990s, researchers have devoted their attention to underwater dexterous and autonomous operations. In the AMADEUS project, two

Fig. 4. Underwater dual arm vehicle manipulator system.


Fig. 5. Milestone progress for I-AUVs.

7-DOF (Degree of Freedom) electrical underwater arms with three-fingered hydraulic grippers were developed for dexterous operations (see Fig. 3). With force and tactile sensors on the fingertips of the end effectors, this project performed coordinated manipulation experiments on a rigid object in a water tank (Casalino G. et al., 2001).

Recently, more underwater dual arm vehicle manipulator systems have been invented for remotely autonomous operation. Under the European Commission's Horizon 2020 Framework, Germany, Italy, Switzerland and Belgium developed an underwater dual arm remotely autonomous operation system for dexterous manipulation under a vision, touch and multi-sensor system (see Fig. 4(a)) (Andreas Birk et al., 2018). A workgroup at Stanford University invented a multisensory humanoid robotic diver, Ocean One, for oceanic discovery (see Fig. 4(b)). It is equipped with a pair of 7-DOF electrical, compliant, torque-controlled arms with gentle hands (Oussama Khatib et al., 2016). These robots have made great progress in underwater dexterous operation, but they still partly depend on operator guidance, with limited observation and operation range.

Different from ROVs, AUVs must be provided with precise dead reckoning navigation and control techniques to realize autonomous cruising and operations. Although in the UNION (Rigaud, V. et al., 1998) and SWIMMER (Evans, J. et al., 2001) projects ROVs were studied to increase autonomy and intelligence with manipulator coordination, the operation range of these ROVs is limited.

The development of the ALIVE, SAUVIM and GIRONA 500 I-AUVs symbolizes three milestones of I-AUV progress (Pere Ridao et al., 2015) (see Fig. 5). The 3.5 t ALIVE was equipped with two hydraulic docking manipulators and a 7-DOF arm. Its design purpose was autonomous navigation, docking and opening a valve for the petroleum industry (Evans, J. et al., 2001).
Equipped with the ANSALDO arm of the AMADEUS project, SAUVIM (6 t) was designed for free-floating underwater operation. It established an autonomous manipulation control architecture based on visual information processing, and for the first time realized autonomous target moving and hook securing in the oceanic environment (Giacomo Marani et al., 2014). On the basis of the TRIDENT project (Enrico Simetti et al., 2014), GIRONA 500 was developed with a 7-DOF manipulator and a 3-fingered hand. It was designed for multisensory-based free-floating manipulation. Since GIRONA 500 (<200 kg) is much smaller than the other two, coupling disturbances must be considered in the manipulation process. Research focused on vision-based dexterous manipulation and vehicle-manipulator coordinated motion (Giuseppe Casalino et al., 2014).

Fig. 6. The target localization process (Giacomo Marani et al. 2014).

2.2. Autonomous environmental perception for underwater operation

Since the visual range in the underwater environment is very limited, researchers of project SAUVIM proposed three stages for target localization and manipulation (Fig. 6): the 375 kHz imaging scan sonars are employed for object searching and heading at distances of more than 30 m; the DIDSON sonar is applied for accurate target recognition and localization at distances of 2–40 m; video cameras are used to perform the actual operation task within the manipulator workspace.

Although acoustic sensors play great roles in target searching, recognition (Donghwa Lee et al., 2012) and tracking (Szymon Krupinski et al., 2015), visual recognition, localization, tracking, servoing (Junaed Sattar et al., 2009) and control are still the foundation of underwater robot manipulation (Peter, 2013). Different from land image processing, large areas of complicated background such as sea water, silt, aquatic plants, rocks and marine life in images of natural aquaculture areas, together with low brightness and currents, give rise to great difficulties in visual perception (Jeet Banerjee et al., 2016).

For object recognition by underwater robots, automatic recognition approaches have been applied using image feature recognition algorithms (Armagan Elibol et al., 2014), generic segmentation processes (J. C. García et al., 2010), and colour- and shape-based identification algorithms (Feihu Sun et al., 2014). But for sea organisms, shape and colour features closely resemble their environment (Antoni Burguera et al., 2016). In order to accurately recognize objects in complex environments, deep learning algorithms have emerged and developed since 2010. Current deep learning algorithms can be classified into end-to-end algorithms and region-proposal-based algorithms. The end-to-end algorithms, which include the YOLO (You Only Look Once) algorithms (Redmon, J. et al., 2016) and SSD (Single Shot Multibox Detector) algorithms (Liu, W. et al., 2016), have the advantage of processing speed, reaching about 45 fps (frames per second). On the other hand, region-proposal-based algorithms combine a region proposal method with Convolutional Neural Networks (CNNs). These algorithms, such as Faster Region-based Convolutional Networks (Faster R-CNN) (Ren, S. et al., 2017) and Region-based Fully


Fig. 7. Visual perception for underwater operation.

Convolutional Networks (R-FCN) (Dai, J. et al., 2016), can process more accurately.

The operation target was successfully localized through polygon- and ellipse-window (Mario Prats et al., 2011) based 3D visual matching and template-based tracking (M. Prats et al., 2012) (see Fig. 7). Huang and Sheng et al. reduced the calibration error by adjusting the placement and posture of the calibration board and improved the characteristic description points of the images by using the Daisy operator in the SIFT (Scale Invariant Feature Transform) algorithm, consequently increasing the servo grasp success rate (Huang et al., 2016). Dario Lodi Rizzini et al. established an integrated stereo vision system for pipe manipulation tasks (Dario Lodi Rizzini et al., 2017). Through calibration, contour- and colour-information-based detection, and pose estimation, pipe grasp was realized in a water pool. However, underwater target localization and recognition strategies are sometimes not very effective due to the dim environment. A laser marker (A. Peñalver et al., 2015) and light illumination are recommended for target detection and pose estimation (P. J. Sanz et al., 2013) for successful grasp (Francisco Bonin-Font et al., 2015a, b).
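The position-sensitive scoring idea behind R-FCN can be illustrated with a small NumPy sketch. This is an illustration only, not the paper's implementation: the channel layout, the uniform bins and the average-vote step are simplifications of the actual network.

```python
import numpy as np

def ps_roi_pool(score_maps, roi, k, num_classes):
    """Position-sensitive RoI pooling with voting (simplified R-FCN).

    score_maps: array of shape (k*k*(num_classes+1), H, W); the channel
        layout is assumed class-major: channel = c*k*k + i*k + j.
    roi: (x0, y0, x1, y1) in score-map coordinates.
    Returns per-class scores of shape (num_classes+1,).
    """
    x0, y0, x1, y1 = roi
    xs = np.linspace(x0, x1, k + 1).astype(int)  # horizontal bin edges
    ys = np.linspace(y0, y1, k + 1).astype(int)  # vertical bin edges
    scores = np.zeros(num_classes + 1)
    for c in range(num_classes + 1):
        vote = 0.0
        for i in range(k):            # vertical bin index
            for j in range(k):        # horizontal bin index
                ch = c * k * k + i * k + j
                patch = score_maps[ch, ys[i]:ys[i + 1] + 1, xs[j]:xs[j + 1] + 1]
                vote += patch.mean()  # average pool inside bin (i, j)
        scores[c] = vote / (k * k)    # average vote over the k*k bins
    return scores
```

Each RoI bin reads only its own group of channels, which is what lets a fully convolutional backbone encode "top-left of an object", "bottom-right of an object", and so on.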

Fig. 8. Three fingered hand (left) and force sensor (right) of GIRONA 500 (G. Palli et al., 2015).

2.3. Underwater vehicle-manipulator system modelling and coordinated control literature

Since the oceanic organism capturing robot will be used by fishermen, its size and cost should be suited to fishing boats. The disturbances of the manipulator on the vehicle during coordinated motion are composed of restoring moments and coupling forces. Changes of the buoyancy and gravity centres during the manipulation process lead to restoring moments (Giacomo Marani et al., 2010), while the coupling forces are caused by the coupled motions between vehicle and manipulator. The modelling and coupling control of the Underwater Vehicle Manipulator System (UVMS) are important for precise manipulation. The methodologies for coordination control include performance-index-based optimization (Ismail, Z.H. et al., 2009), model-based observation and control (Santhakumar, M., 2011), and task-priority control (Gianluca Antonelli et al., 2003).

Jonghui Han and Wan Kyun Chung et al. (2011) established an inertial coordinate system on the basis of UVMS modelling. A performance index was proposed to selectively optimize the restoring forces, so that the control input can compensate the restoring forces and moments (Jonghui Han et al., 2014). Huang and Tang et al. systematically analysed dynamic modelling of the UVMS on the basis of Newton-Euler dynamics, and an S-surface-based adaptive control was proposed for the coordinated control (Hai Huang et al., 2017). The Kyushu Institute of Technology developed robust control for UVMS coordination on the basis of a dynamic model (Shinichi Sagara et al., 2010). Based on a task-priority framework, Enrico Simetti et al. applied redundant DOFs to reduce restoring moments during the manipulation of GIRONA 500 (Enrico Simetti et al., 2014). They further applied task priority in free-floating manipulation control by taking higher precision as the higher-priority criterion, with lower priority for the camera occlusion, centring distance and height tasks, to guarantee manipulation in sight (Ninad Manerikar et al., 2015). Moreover, coordination control and manipulation methods for the dual arm underwater vehicle manipulator system (DAUVMS) have been considered in simulations (Hamed Farivarnejad et al., 2014). Compared with the UVMS, the DAUVMS is a more redundant system and is expected to maintain body stability, realize cooperative manipulation and regulate task priorities for underwater operations (E. Simetti and Casalino, 2015).

Task priority in the references has involved trajectory planning in coordinated control. In fact, the coordinated motion trajectories of vehicle and manipulator should be optimized so that the UVMS can preserve enough control force against current disturbances in field operations.

2.4. Robotic hand for uninjured grasp

Robotic uninjured grasp has attracted great attention worldwide due to its great prospects in surgical operation and underwater rescue. Traditionally, gentle grasp is realized through feedback control of MEMS-based multi-dimensional force sensors on the tips of rigid fingers. Such hands include the NASA hand (Diftler M A et al., 2003), the Gifu hand, the DLR hand (Thomas Wimböck et al., 2007) and the HIT/DLR hand (H. Liu et al., 2007). From these researches, multi-dimensional force sensor based multi-DOF underwater hands have been developed, such as the three-finger hand and multi-dimensional force sensor of GIRONA 500 (G. Palli et al., 2015) (see Fig. 8), a 3-finger 9-DOF dexterous hand with sealed tip force sensors at Harbin Engineering University (Wang, 2006), and a 3-finger 6-DOF dexterous hand with sealed tip force sensors by Liang and Zhang et al. (2010). However, the sealed structure for multi-dimensional force sensors is complicated, and working depth and sensor accuracy are in conflict. In consideration of hand weight and decoupling accuracy (Ronghuai Qi et al., 2014), the maximum working depth is usually less than about 100 m.

In the last decade, soft and continuum robots have attracted great attention from researchers (Yap et al., 2015). Air muscles have been applied for gentle grasp due to their low weight and inherently compliant behaviour, for example in the Shadow hand (Robert Bogue et al., 2012) and the ZJUT-Hand (Wang et al., 2014). On the basis of the air muscle concept, the AMADEUS


underwater hand was designed with three parallel arranged bellows. Bending and extension of the hand are realized through pressure changes in each bellow (David M. Lane et al., 1999). However, air muscles still have disadvantages such as slow response speed and expensive modelling (G. Chena et al., 2008). Nicholas Farrow and Correll (2015) and Mohammadreza Memarian et al. (2015) have researched soft pneumatic continuum fingers. By using elastic polymer as the soft finger material, bending and extension are realized by inflating the air chamber, so the finger can grasp quickly, and the finger mould can be fabricated with 3D printers at low cost. Soft pneumatic continuum fingers may hold more promise for uninjured submarine organism grasp at low cost, but their actuation requires the number of air pipes to correspond with the number of DOFs, which greatly affects both manipulation motions and working depth.

3. Autonomous capture process analysis for underwater robot

3.1. Mechanics design and electronic control system

In order to realize autonomous robotic capture, two major methods have been applied: robotic absorption capture and robotic grasp capture. The robot is often designed with an open-frame structure (see Fig. 9) in order to install sensors, a sea organism container, a pump or a manipulator. The robot is usually less than 1 m × 0.8 m × 0.8 m in overall dimensions and weighs less than 100 kg, to suit fishing boat deployment and recovery. The depth rating is usually less than 100 m for offshore capture. The robot is usually equipped with flotation foam at the upper layer, 2 vertical thrusters and 4 horizontal thrusters. The sensors include a depth meter, an altimeter, a magnetic compass and cameras.

The electronic control architecture in the robot capsule includes a surface computer and an embedded PC104 (see Fig. 10); the former is responsible for environmental perception, while the latter is responsible for the planning and control of capture motions. The power of the robot is supplied by a DC source through an umbilical cable between the surface and the robot. The robot can be equipped with a magnetic compass and a Doppler Velocity Log (DVL) speed sensor for position estimation. Moreover, the DVL can provide the height information necessary for target capture. Although the autonomous capture process differs between the absorption and grasp types, the environmental perception guidance process from underwater sonar and cameras is similar.

Fig. 9. Examples of absorption (left) and grasp (right) type underwater robot.

Fig. 10. Electronic control architecture of the robot.
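As a concrete illustration of the motion-control side of this architecture, a depth-hold loop of the kind the embedded computer might run on the vertical thrusters can be sketched as follows. This is a minimal sketch: the gains, time step and thruster saturation limit are hypothetical placeholders, not values from the paper.

```python
class DepthHoldPID:
    """Minimal PID depth controller sketch for the vertical thrusters.

    All gains and the saturation limit are illustrative placeholders.
    """

    def __init__(self, kp=80.0, ki=5.0, kd=30.0, dt=0.1, u_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt            # control period in seconds
        self.u_max = u_max      # thruster command saturation
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, depth_setpoint, depth_measured):
        err = depth_setpoint - depth_measured        # positive -> dive deeper
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.u_max, min(self.u_max, u))  # saturate command
```

In practice the depth-meter reading would arrive over the capsule's sensor bus and the saturated command would be mapped to the two vertical thrusters.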


3.2. Target recognition and capture architecture

In consideration of the limited submarine visual range, environmental and organism target perception can be realized through different detection methodologies or instruments corresponding to long, medium and short range perception (see Fig. 11). In other words, mechanical scan sonar can be employed to discover the rocky habitat of sea cucumbers at the 5–20 m long range; cameras can be applied to further ascertain the organism habitat from obscure images at the 2–5 m middle range; and precise recognition and distance measurement can be realized through cameras at the 1–2 m short range. Illumination and a laser marker are recommended for visual tracking and servoing.

In order to realize organism recognition, region-based fully convolutional networks (R-FCN) have been applied to detect the organisms (Huang et al., 2019). The offshore aquatic recognition samples were provided by Zhangzidao Group Corporation and Dalian Institute of Technology. k²(C+1) layers of position-sensitive score maps are generated from the feature maps through convolutional computation in order to classify regions of interest (RoIs). Through RoI pooling, k² position-sensitive score generation and R-FCN training, the recognition can be realized. After target recognition and localization, the autonomous absorption process in Fig. 12 can be realized through following control using the target position and angle in the camera relative to the pipe. On the other hand, the target grasp process is more complicated, because it includes motion planning and manipulation control to realize coordination between the vehicle and the multi-DOF manipulator.

Fig. 11. The organism localization process.

3.3. Task priority redundancy resolution approach for grasp capture motion planning

During the manipulation process, restoring forces are important interaction forces resulting from changes of the centres of gravity and buoyancy. According to Hai Huang et al. (2017), the restoring forces can be expressed as:

$$G(q) = \begin{bmatrix} {}^{v}f_{g}^{v} + {}^{v}f_{B}^{v} + \sum_{i=1}^{n}\left({}^{v}f_{g}^{i} + {}^{v}f_{B}^{i}\right) \\ {}^{v}r_{g}^{v}\times{}^{v}f_{g}^{v} + {}^{v}r_{B}^{v}\times{}^{v}f_{B}^{v} + \sum_{i=1}^{n}\left({}^{v}r_{g}^{i}\times{}^{v}f_{g}^{i} + {}^{v}r_{B}^{i}\times{}^{v}f_{B}^{i}\right) \end{bmatrix} \tag{1}$$

where ${}^{v}f_{g}^{i} = {}^{n}R_{v}^{T}\,{}^{n}f_{g}^{i}$ and ${}^{v}f_{B}^{i} = {}^{n}R_{v}^{T}\,{}^{n}f_{B}^{i}$ represent the restoring forces of the $i$th link relative to the underwater vehicle,

$${}^{n}f_{g}^{v} = \begin{bmatrix} 0 & 0 & m_{v}g \end{bmatrix}^{T}, \qquad {}^{n}f_{B}^{v} = \begin{bmatrix} 0 & 0 & -B_{v} \end{bmatrix}^{T} \tag{2}$$

${}^{v}f_{g}^{v} = {}^{n}R_{v}^{T}\,{}^{n}f_{g}^{v}$ and ${}^{v}f_{B}^{v} = {}^{n}R_{v}^{T}\,{}^{n}f_{B}^{v}$ denote the restoring forces of the underwater vehicle, ${}^{v}r_{g}^{v}$ and ${}^{v}r_{B}^{v}$ are the vehicle CG and CB position vectors relative to the vehicle geometric centre, respectively, and ${}^{v}r_{g}^{i}$ and ${}^{v}r_{B}^{i}$ are the $i$th link CG and CB position vectors relative to the vehicle geometric centre, respectively.

For the redundant-DOF UVMS, the redundancy resolution method determines the UVMS's position, velocity and acceleration during the task. UVMS trajectory planning is one of the most important differences between absorption capture and grasp capture, and the redundancy resolution method is an important part of it. Widely used redundancy resolution methods, such as the least-squares solution method (Chan T et al., 1993), the gradient projection method (GPM) (Shen Y. et al., 2005) and the task-priority method, cannot fully satisfy the actual needs of UVMS path planning. Thus, a task-priority redundancy resolution (Antonelli G. et al., 1998) at the acceleration level, capable of smoothing velocities and optimizing restoring moments, is proposed in this section. In order to realize continuous and stable organism capture motion, the task-priority-based approach has been developed to optimize restoring moments. The task priority kinematics equation can be expressed as:

Fig. 12. The organism capture process frame.


Fig. 13. History of velocity-level inertia weight solution.

$$\dot{x}_p = J_p\,\dot{q} \tag{3}$$

where the priority task vector $\dot{x}_p$ includes the end-effector velocity $\dot{x}_e$ and the vehicle displacement and rotation velocity vector, and $J_p$ is the Jacobian matrix corresponding to $x_p$. Taking the derivative of Eq. (3) with respect to time, the kinematic equation at the acceleration level can be expressed as:

$$\ddot{x}_p = J_p\,\ddot{q} + \dot{J}_p\,\dot{q} \tag{4}$$

By using the gradient projection method, the acceleration-level inverse kinematics of Eq. (4) can be expressed as:

$$\ddot{q} = J_{pW}^{\dagger}\,\tilde{x}_p + \left(I - J_{pW}^{\dagger}J_p\right)\ddot{\varphi} \tag{5}$$

where $J_{pW}^{\dagger} = W^{-1}J_p^{T}\left(J_pW^{-1}J_p^{T}\right)^{-1}$, $\ddot{\varphi}$ is a vector of arbitrary value, $\left(I - J_{pW}^{\dagger}J_p\right)$ is the null-space projector of $J_{pW}$, $W$ is the weight coefficient matrix, and $\tilde{x}_p = \ddot{x}_p - \dot{J}_p\,\dot{q}$.

In order to reduce the influence of the restoring moment, a positive definite scalar potential function is defined as:

$$p(q) = G(q)^{T}\,W_u\,G(q) \tag{6}$$

where $G(q)$ is the restoring moment and $W_u$ is a positive definite weight matrix. The negative gradient of the function $p(q)$ is the direction in which the restoring torque of the system decreases. The gradient is expressed as:

$$\nabla p(q) = \frac{\partial p(q)}{\partial q} = G(q)^{T}\,W_u\,\frac{\partial G(q)}{\partial q} \tag{7}$$

Taking the gradient $\nabla p(q)$ as the arbitrary vector:

$$\ddot{\varphi} = -\kappa\,\nabla p(q)^{T} \tag{8}$$

where $\kappa$ is a gain coefficient. To suppress numerical drift, the acceleration variable $\tilde{x}_p$ should be written in closed-loop form:

$$\ddot{X} = \tilde{x}_p + k_v\left(\dot{x}_{des} - \dot{x}_p\right) \tag{9}$$

where $\dot{x}_{des}$ is the desired task velocity and $k_v$ is a constant. Therefore, the acceleration-level task-priority redundancy resolution approach for restoring moment optimization takes the form:

$$\ddot{q} = J_{pW}^{\dagger}\,\ddot{X} - \left(I - J_{pW}^{\dagger}J_p\right)\kappa\,\nabla p(q)^{T} \tag{10}$$
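The resolution of Eqs. (5)–(10) can be exercised numerically. The sketch below is illustrative only: the Jacobian, weight matrix and restoring-moment gradient are random stand-ins rather than a UVMS model, but the weighted pseudoinverse and null-space projection are computed exactly as in the equations above.

```python
import numpy as np

def weighted_pinv(J, W):
    """Weighted pseudoinverse: J# = W^-1 J^T (J W^-1 J^T)^-1."""
    W_inv = np.linalg.inv(W)
    return W_inv @ J.T @ np.linalg.inv(J @ W_inv @ J.T)

def task_priority_accel(J, Jdot_qdot, W, xddot_task, grad_p, kappa):
    """Acceleration-level resolution, Eq. (10):
        qdd = J# Xdd - (I - J# J) * kappa * grad_p
    Here Xdd stands in for the closed-loop task acceleration of Eq. (9).
    """
    Jp = weighted_pinv(J, W)
    n = J.shape[1]
    N = np.eye(n) - Jp @ J           # null-space projector (I - J# J)
    Xdd = xddot_task - Jdot_qdot     # x~_p = xdd_p - Jdot qdot, Eq. (5)
    return Jp @ Xdd - N @ (kappa * grad_p)

# Toy numbers: 3 task DOFs, 7 joint DOFs (redundant system).
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 7))
W = np.eye(7)
qdd = task_priority_accel(J, np.zeros(3), W,
                          np.array([0.1, 0.0, -0.2]),
                          rng.standard_normal(7), kappa=0.5)
# Since J N = 0, J @ qdd reproduces the commanded task acceleration,
# while the gradient term acts only in the null space of the task.
```

The key property being exercised is that the restoring-moment optimization term is projected into the null space, so it never disturbs the primary task.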


Fig. 14. History of acceleration-level restoring moment optimized solution.

4. Simulation and experiments

4.1. Simulations on the task priority redundancy resolution motion planning approach

Two numerical cases are carried out to verify the validity of the motion planning approach of this section. The first uses the inertia-weighted resolution at the velocity level, whose weight matrix W is chosen as the inertia matrix M. The second uses the restoring-moment-optimized resolution at the acceleration level. By contrast with the former case, the effectiveness of the approach can be verified.

The moving trail of the velocity-level inertia-weighted approach is shown in Fig. 13 (g). The advantage of this approach is that minimum instantaneous kinetic energy is guaranteed during travel by using the inertia matrix M. Because the vehicle mass is much greater than the manipulator's, the range of movement of the vehicle is strongly restricted. The desired trajectory is almost completed by the movement of the manipulator, and the acceleration of the vehicle is almost zero in the first 4 s. Then the desired trajectory gradually surpasses the range of motion of the manipulator, and the vehicle must use its own displacement to ensure the end effector reaches its desired position, as seen in Fig. 13 (a) and (d). It is worth noting that in Fig. 13 (b) and (d) the rotation velocity and acceleration are maintained at very small values. Meanwhile, as shown in Fig. 13 (g), the pitch and roll angles of the vehicle remain stable. The effectiveness of the task-priority approach is verified in this case. However, there are obvious sudden changes in the vehicle's and manipulator's accelerations, which is very unfavourable for the control system. This problem can be solved by resolving the redundancy at the acceleration level.

In the case of the restoring-moment-optimized approach, the simulation results are shown in Fig. 14. Seen intuitively from Fig. 14 (g), the vehicle movement is significantly decreased, as is the range of the manipulator's movement. The restoring moments on the joints, the vehicle yaw angle and the vehicle pitch angle reflect the same trend, with no significant difference in numerical value. In the case of the acceleration-level task-priority redundancy resolution approach, the moment on the vehicle roll angle decreases constantly and finally tends to zero. Obviously, the results obtained by the proposed approach reduce the effect of the restoring moments. In comparison of Fig. 13 (a) with


Fig. 15. Shellfish autonomous absorption process.

Fig. 16. Shellfish autonomous grasp process.

Fig. 14 (a), the amplitude of the motion is further reduced. This is because that the reduction of the restoring moments decreases the work which is used to overcome the restoring moments, so that it can accomplish the scheduled task while lower velocity is reached.

relative position and angle to the pipe, the robot would autonomously control the motion in order that the pipe could close and cover the target, when the target was covered by the pipe, the robot will absorb the shell fish through the absorptive pipe (see Fig. 15). In Fig. 16, the target grasp experiment was made in a tank of 50 m in length, 30 m in width and 10 m in depth. The robot mechanics can be seen in Fig. 10, it is an open frame underwater vehicle with a three DOFs manipulator. Each joint with its own angular feedback is correspondent to one DOF. Similar with absorption process at first, the robot would at first search and detect the sea organism target. At the time the target was recognized the robot will measure the distance between target and the robot through binocular vision, and plan the manipulation trajectory in the target coordinate. During the manipulation process, the robot at first recognized the shellfish target at the height of 0.7 m. And then the tra­ jectory, velocity and acceleration are continuously planed on the basis of distance feedback and vehicle positions with the method of 3.3, until the UVMS could dive down and captured the target (see Fig. 16).

4.2. Experiments on target capture methods On the basis of the mechanical design of an absorptive type under­ water robot (see the left of Fig. 10), the target autonomous absorptive experiment has been made in Zhangzidao Island in China. At first, the robot searched the sea organism target. During searching process, the visual detection process was undergoing. Then the shell fish target has been recognized through the method of region-based fully convolutional networks. Afterward, the target is locked and the location of the target would be located in the camera relative to the pipe. The autonomous control method would coordinate with target following process in the pictures of Fig. 15. Through the feedback information of the target 9
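The binocular distance measurement described above follows the standard rectified-stereo triangulation relation Z = f·B/d (focal length f, baseline B, horizontal disparity d). A minimal sketch follows; the focal length, principal point, baseline and pixel coordinates are illustrative placeholders, not the experimental camera's calibration:

```python
import numpy as np

def stereo_target_position(xl, xr, y, fx, cx, cy, baseline):
    """Triangulate a matched target point from a rectified stereo pair.
    Disparity d = xl - xr; depth Z = fx * baseline / d; the lateral
    offsets follow pinhole back-projection."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    Z = fx * baseline / d
    X = (xl - cx) * Z / fx
    Y = (y - cy) * Z / fx
    return np.array([X, Y, Z])

# Hypothetical calibration (not the paper's values): 800 px focal length,
# 640x480 image centre, 0.12 m baseline
p = stereo_target_position(xl=412.0, xr=292.0, y=310.0,
                           fx=800.0, cx=320.0, cy=240.0, baseline=0.12)
# d = 120 px  ->  Z = 800 * 0.12 / 120 = 0.8 m
```

In practice the matched target coordinates would come from the detector's bounding boxes in the left and right images, and the resulting depth feeds the continuous trajectory replanning until the manipulator reaches the target.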

5. Discussion and concluding remarks


Offshore robotic capture of organisms requires robots to achieve gentle and agile grasping through autonomous environmental perception, coordinated vehicle-manipulator control and uninjured grasping. Although researchers have developed effective methodologies in these areas, difficulties and challenges remain. First, the limited visual range offshore leaves targets unseen at long range and obscure at medium range, with complicated backgrounds, low contrast and geometric deformation of the image. Second, during the capture process the vehicle confronts disturbances from the ocean current as well as coupling and restoring forces from the manipulator; not only is trajectory planning for coordinated UVMS motion necessary, but anti-disturbance and precise control of the vehicle are also important. Third, this article has analyzed and compared robotic autonomous absorption capture and grasp capture. The two methods share the same visual search and detection procedure. In terms of capture motion and control, the absorption process is simpler, while the grasp process is more complicated but does less harm to the seabed if deployed in large quantities in the future.

On the basis of the literature comparison and analysis, this article has proposed a control architecture for the remote autonomous robot with organism absorption or grasp functions. The proposed acceleration-level task-priority redundancy resolution approach can optimize restoring moments and generate grasp trajectories for motion planning. The model-based adaptive control approach can coordinate the motions between the vehicle and the manipulator for target capture. Experiments on autonomous organism grasp and absorption have both been presented. This work will further improve robot autonomy and the efficiency of organism capture.
Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This project is supported by the National Natural Science Foundation of China (No. 61633009, 51579053, 5129050, 51639004, 61603277, 51779052, 51779059), the Community Technology (No. 41412050101) and Field Fund (No. 61403120409) of the 13th Five-Year Plan for Equipment Pre-research Fund, the Key Basic Research Project of the "Shanghai Science and Technology Innovation Plan" (No. 15JC1403300), the Plan Projects of Thousand Youth Talents (No. 1000231901), and the Aerospace Science and Technology Innovation Fund (SAST201607). All these supports are highly appreciated. The authors would also like to thank Dalian University of Technology and Zhangzidao Group Corporation, whose support in the contest and field trials made this study possible.
