GARBI: a low-cost underwater vehicle




Microprocessors and Microsystems 23 (1999) 61–67

J. Amat a,*, A. Monferrer b, J. Batlle c, X. Cufí c

a IRI (Institute of Robotics), Ed. Nexus, C/ Gran Capità No. 2, 08028 Barcelona, Spain
b Department of Automatic Control Engineering, Universitat Politècnica de Catalunya, C/ Pau Gargallo No. 5, 08028 Barcelona, Spain
c Department of Electronic Engineering, Institute of Informatics and Applications, Av. Lluís Santaló s/n, 17003 Girona, Spain

* Corresponding author. Tel.: +34-934016972; fax: +34-934017045. E-mail address: [email protected] (J. Batlle)

Abstract

The progress in robotics, computer technology, sensors and perception systems has produced great advances in the field of underwater robotics, in the research and development of both remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs). Unfortunately, this progress has led to the use of very expensive devices and, consequently, to the development of high-cost underwater vehicles. As an alternative approach, the GARBI underwater robot has been designed as a low-cost ROV for underwater applications in shallow-water environments. This paper describes the underwater vehicle and its on-board systems, the robot's goals and its main applications. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: GARBI; Underwater vehicle; Remote control unit; Stereoscopic vision system

1. Introduction

The progress made in the field of underwater vehicles, either autonomous (AUVs) or teleoperated (ROVs), has been considerable in recent years [1,2]. This progress, based mainly on the use of new sensors and improved environment perception systems, has also brought with it a substantial increase in the cost of these vehicles. In some cases this cost can restrict their applicability; hence, some researchers have turned to the design and development of low-cost underwater robots [3,4].

Owing to cost restrictions, and with the aim of providing a team of biologists with a system able to perform long observations of the flora and certain undersea fauna, we were commissioned to study and develop a low-cost prototype. This prototype is oriented both to performing undersea observations and to carrying out simple manipulation operations, such as teleoperated sample collection. Furthermore, it must be able to perform some predefined operations autonomously, such as dynamically stabilising the vehicle in front of an object, tracking an object when it moves, or way-point-following navigation towards a goal position. This autonomous operation is necessary in order to carry out long observations when the continuous attention of a human operator is not available.

This paper is structured as follows: Section 2 describes the characteristics of the vehicle and Section 3 presents its control system. Section 4 then presents the goals and applications and, finally, Section 5 shows an example of two simple control behaviours, a Go-to-Point behaviour and an obstacle avoidance behaviour.

2. Vehicle characteristics

The vehicle has been conceived for exploration in shallow waters, down to 200 m, and in order to reduce its cost the number of degrees of freedom has been reduced to four (Fig. 1), corresponding to the linear movements along the X-, Y- and Z-axes and the rotation θ around the Z-axis. It has been possible to disregard the two remaining rotations, around the X- and Y-axes, by providing the vehicle with high vertical stability through the necessary ballast mounted on its base.

The movements in these four degrees of freedom are produced by means of five propellers placed as shown in Fig. 2. With this propeller layout it is possible to attain a higher propulsion force along the X- and Z-axes, as required in most of the movements. Each propeller has a power of 200 W, allowing the vehicle to move at a speed of 3 knots.

The vehicle is fitted out with two arms [5] (Fig. 3) and can thus perform some tasks through teleoperation. The number of degrees of freedom of each arm is reduced to three for simplicity and to attain a low-cost system. With these simple arms some relatively simple tasks, such as sample collection, can be performed, although this simplicity brings with it some operational difficulties.




Fig. 1. Degrees of freedom of Garbí.
Fig. 3. Garbí prototype.

In order to solve the grasping difficulties produced by the lack of degrees of freedom at the wrists of both arms, the grippers have been arranged with different orientations, one vertical and the other horizontal. This arrangement forces the use of only one of the two arms for grasping, while the other plays only an auxiliary role in the task. Which function each arm adopts depends on the layout of the elements to be manipulated.

As sensors for navigation control, the vehicle has a compass and two speed logs, as well as an absolute positioning system based on a transponder. As regards environment sensing, the vehicle has six ultrasonic sensors distributed over its surface to detect obstacles in both directions along the three axes. In addition, it is endowed with a stereoscopic vision system. The aim of the vision system is twofold: first, it is used to obtain images of the environment for teleoperation; second, the 3D data extracted are used for local position control, to dynamically stabilise the vehicle in front of an object under observation.

Fig. 2. Garbí propeller layout.

With the aim of reducing as much as possible the response time in the local stabilisation control mode, the vehicle contains an on-board control unit based on four PC-type computers.
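As a rough illustration of how the four commanded degrees of freedom can be mapped onto the five propellers described above, the sketch below allocates a desired surge/sway/heave force and yaw moment to individual thruster commands. The propeller arrangement assumed here (two longitudinal, one lateral, two vertical units) and the mixing coefficients are assumptions for illustration only; the paper states only the number and power of the propellers and the general layout of Fig. 2.

```python
# Illustrative thrust-allocation sketch for a 4-DOF vehicle driven by five
# propellers (two longitudinal, one lateral, two vertical). The exact GARBI
# propeller geometry is not given in the paper, so this layout is assumed.

from typing import List

MAX_THRUST = 1.0  # normalised per-propeller limit (each unit is a 200 W thruster)

def allocate_thrust(fx: float, fy: float, fz: float, mz: float) -> List[float]:
    """Map desired body-frame forces (surge fx, sway fy, heave fz) and yaw
    moment mz to the five propeller commands [port, starboard, lateral,
    vert_fwd, vert_aft], all normalised to [-1, 1]."""
    port      = 0.5 * fx - 0.5 * mz   # longitudinal pair also produces yaw
    starboard = 0.5 * fx + 0.5 * mz
    lateral   = fy                    # single transverse thruster
    vert_fwd  = 0.5 * fz              # vertical pair shares the heave demand
    vert_aft  = 0.5 * fz
    cmds = [port, starboard, lateral, vert_fwd, vert_aft]
    # Saturate while preserving the direction of the commanded motion.
    scale = max(1.0, max(abs(c) for c in cmds) / MAX_THRUST)
    return [c / scale for c in cmds]

if __name__ == "__main__":
    # Full-ahead surge with a slight turn to starboard.
    print(allocate_thrust(fx=1.0, fy=0.0, fz=0.0, mz=0.2))
```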

3. Control system architecture

The vehicle is controlled from the surface by means of a control station oriented to facilitating mission programming and to controlling the robot arms through the operator's gestures during teleoperation. The control system architecture is based on a set of PC-type computers structured in a hierarchical way, as shown in Fig. 4. One of these computers, in the control station, is devoted to mission control. It has two main functions: first, to control the vehicle navigation towards a predetermined point with the aid of an acoustic positioning system and a compass; second, to control the robot arms within the working zone.

Fig. 4. Control system architecture.

The vehicle contains four more computers. The first one carries out the vehicle control and supports the communications protocol with the external control unit. This computer receives information from three differentiated modules:

• the stereoscopic vision module;
• the ultrasonic sensor (USS) data acquisition and processing module; and
• the navigation data acquisition and conditioning system.

Each of these modules is endowed with its own computer and interfaces, as shown in Fig. 5. Each module therefore processes the data it obtains and provides more elaborate information to the vehicle control computer. This pre-processing reduces the response time of the vehicle control computer, ensuring better real-time control behaviour.

Fig. 5. Vehicle control structure.
Fig. 6. The characteristic points automatically obtained in the window.

3.1. Remote control unit

The remote control unit computer is dedicated to the user interface, allowing the programming and human supervision of the mission. The functions performed by this computer are:

• programming of the navigation trajectories for the approach to the desired operation point;
• visualisation and control of the trajectory;
• acquisition of information from the central control unit peripherals (transponder system and master arms) for operation telecontrol; and
• generation of graphic and alphanumeric information used for tracking and supervising the vehicle status.

This computer is an industrial-type PC with a screen for visually tracking the execution of the mission, so that the user can program and visualise the state of the mission. A Man–Machine Interface has been studied and developed with the aim of attaining the maximum efficiency/simplicity ratio. This facility is essential to command efficiently a vehicle with such large spatial mobility, which forces the operator to control a relatively high number of data simultaneously.
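To summarise the hierarchy described above, the following sketch models the three on-board sensing modules delivering pre-processed results to the vehicle control computer, which assembles the status information sent up the umbilical to the control station. The message fields, types and contents are illustrative assumptions; the paper does not specify the actual communication format.

```python
# Sketch of the data flow described in Section 3: three on-board modules
# (stereo vision, ultrasonic sensing, navigation data conditioning) each
# deliver pre-processed results to the vehicle control computer, which
# forwards a status packet to the surface control station. Field names and
# contents are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VisionFix:            # relative 3D position of the observed object
    rel_pos: Tuple[float, float, float]

@dataclass
class SonarRanges:          # six ultrasonic ranges, one per axis direction (metres)
    ranges: Tuple[float, float, float, float, float, float]

@dataclass
class NavData:              # compass heading (rad) and speed-log readings (m/s)
    heading: float
    speed_port: float
    speed_stbd: float

@dataclass
class StatusPacket:         # what the vehicle reports up the umbilical
    heading: float
    surge_speed: float
    nearest_obstacle: float
    vision_fix: Optional[Tuple[float, float, float]]

def vehicle_control_cycle(vision: Optional[VisionFix],
                          sonar: SonarRanges,
                          nav: NavData) -> StatusPacket:
    """One cycle of the on-board control computer: fuse the pre-processed
    module outputs into the packet reported to the central control unit."""
    return StatusPacket(
        heading=nav.heading,
        surge_speed=0.5 * (nav.speed_port + nav.speed_stbd),
        nearest_obstacle=min(sonar.ranges),
        vision_fix=vision.rel_pos if vision else None,
    )
```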

Fig. 7. The position vector obtained: (a) the points and (b) the three selected points.



The vehicle's teleoperated arms are actuated by means of a light exoskeleton which has the same three degrees of freedom per arm and the same geometry as the vehicle arms. To attain an acceptable level of operability, and considering the constraints imposed by the low-cost requirements, the exoskeleton is passive, which means that no force information is transferred back to the user. However, there is still a certain sensory return through the image on the screen: this feedback shows graphically the existence of possible collisions with the arms. In any case, such collisions do not produce any physical damage to the arms thanks to their robust but flexible construction. With these characteristics the arms can even absorb slight collisions when the vehicle navigates among obstacles.

3.2. Vehicle control unit

The vehicle control module is based on a computer board with a 486 CPU. This module acts as an intelligent interface between, on the one hand, the arm actuators (DC motors) and the propellers and, on the other, the central control unit. The latter executes the trajectory control algorithms from the data that the on-board computer sends to it through the umbilical cable and from the estimated vehicle position, obtained by the acoustic absolute positioning system based on a transponder placed on the vehicle. Therefore, the vehicle control unit executes a trajectory which is generated and corrected at short time intervals by the central control unit.

The local control system is also able to control the propellers in order to dynamically stabilise the vehicle in front of an identifiable object. The stabilisation is done from the relative position between the vehicle and the object, which is supplied by the stereoscopic vision system every 120 ms.

In order to obtain better sensitivity to the vehicle's speed relative to the water in the low-speed range, an electrodynamic speed sensor is used. This kind of sensor has to be recalibrated frequently because of the drift it produces; on the other hand, it does not present the dead-zone drawback of blade-based sensors. The drift is continuously corrected from the vehicle's absolute position, which is calculated every 2 s with the transponder system from the propagation times between two control points at the water surface and the transponder on the vehicle. Using a linear sound propagation model, the attainable precision is of the order of 40 cm [6]. In spite of this low precision, the control of the navigation trajectory is facilitated by the high resolution of the speed sensors.

Fig. 8. Way-Point-Following behaviour based on the Heading error.



Fig. 10. Test of the Object Avoidance Behaviour.

Fig. 9. Obstacle Avoidance principle.

These sensors are used to estimate the vehicle trajectory with a resolution of the order of a few centimetres, although an accumulated drift is produced which has to be corrected periodically by the control unit using the absolute positioning system.
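A minimal sketch of the position-keeping scheme just described: the speed-log readings are integrated at a high rate (dead reckoning), and each acoustic fix, available roughly every 2 s with a precision of about 40 cm, pulls the estimate back to correct the accumulated drift. The update rate and the blending gain below are assumptions for illustration, not values from the paper.

```python
# Dead reckoning from the speed logs, periodically corrected by the acoustic
# absolute positioning fix (Section 3.2). Rates and gain are illustrative.

import math

class PositionEstimator:
    def __init__(self, x: float = 0.0, y: float = 0.0, blend: float = 0.3):
        self.x, self.y = x, y
        self.blend = blend          # weight given to each acoustic fix

    def predict(self, surge: float, sway: float, heading: float, dt: float) -> None:
        """Dead reckoning from body-frame speeds (m/s) and compass heading (rad)."""
        self.x += (surge * math.cos(heading) - sway * math.sin(heading)) * dt
        self.y += (surge * math.sin(heading) + sway * math.cos(heading)) * dt

    def correct(self, x_fix: float, y_fix: float) -> None:
        """Pull the estimate towards the absolute acoustic position fix."""
        self.x += self.blend * (x_fix - self.x)
        self.y += self.blend * (y_fix - self.y)

if __name__ == "__main__":
    est = PositionEstimator()
    for _ in range(20):                       # 2 s of dead reckoning at 10 Hz
        est.predict(surge=0.5, sway=0.0, heading=0.1, dt=0.1)
    est.correct(x_fix=1.05, y_fix=0.12)       # acoustic fix roughly every 2 s
    print(round(est.x, 3), round(est.y, 3))
```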

3.3. The stereoscopic vision system

The vision system, besides sending images to the surface station, has local processing capabilities. This makes it possible to solve some automated tasks in situ, for example following a slowly moving target or dynamically stabilising the vehicle, or the image, in front of an object. The vision system in the developed prototype uses two conventional colour TV cameras. This stereoscopic system contains specialised hardware which allows the fast processing required by the capsule-stabilising algorithms. It consists of one processor for each of the stereoscopic cameras, both controlled by a 486 PC. These processors are two-stage and pipelined, allowing real-time (20 ms) feature extraction. In the first stage the contours are obtained, while in the second stage the vertices are extracted; these are the features used to parametrise the selected patterns in the scene. The features are transferred to the 486 PC, which, by pairing homologous characteristic points, obtains a relative three-dimensional (3D) position vector every 100 ms.

In order to obtain this position, the vehicle operator places a window over the zone of the image to be observed. The vision system automatically obtains the characteristic points of the image within the window. The characteristic points, the vertices, are classified according to their relevance, using as the discrimination function the angle of each detected vertex and the contrast between the grey levels, or colours, at both sides of the contour defined by each vertex. Once the classification of vertices has been carried out, three of them are selected (Fig. 6). The matching of these three singular points with their homologues is done from the characteristics extracted from each vertex (angle, contrast and relative position). From these three points a position vector in 3D space is obtained (Fig. 7). When the operator fixes the vehicle stabilisation point with respect to the selected scene, this vector is taken as the reference vector, and from it the 3D position error vector is calculated.
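The following sketch illustrates, under idealised assumptions, how a relative 3D position vector and the corresponding error vector could be computed from the three matched vertex pairs: a parallel-axis stereo rig with an assumed baseline, focal length and principal point is used for triangulation, and the three points are averaged. The camera parameters and the averaging step are illustrative; the paper does not detail the actual stereo geometry or calibration.

```python
# Sketch of obtaining a relative 3D position vector from three matched vertex
# pairs, assuming an idealised parallel-axis stereo rig. Baseline, focal
# length and principal point are assumptions for illustration.

from typing import List, Tuple

BASELINE_M = 0.20       # assumed distance between the two cameras (m)
FOCAL_PX = 800.0        # assumed focal length in pixels
CX, CY = 320.0, 240.0   # assumed principal point

def triangulate(left: Tuple[float, float], right: Tuple[float, float]) -> Tuple[float, float, float]:
    """Depth from disparity for one matched vertex pair (pixel coordinates)."""
    disparity = max(left[0] - right[0], 1e-6)
    z = FOCAL_PX * BASELINE_M / disparity
    x = (left[0] - CX) * z / FOCAL_PX
    y = (left[1] - CY) * z / FOCAL_PX
    return (x, y, z)

def position_vector(pairs: List[Tuple[Tuple[float, float], Tuple[float, float]]]):
    """Average the three triangulated vertices into a single position vector."""
    pts = [triangulate(l, r) for l, r in pairs]
    xs, ys, zs = zip(*pts)
    n = len(pts)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

def position_error(current, reference):
    """3D error vector used by the dynamic stabilisation control."""
    return tuple(c - r for c, r in zip(current, reference))
```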

4. GARBI: goals and applications

The underwater vehicle developed is oriented basically towards observation and teleoperated manipulation tasks. The manual control of the vehicle is not done by acting directly on the propellers; such a control would be very awkward for the user, even though the vehicle has only four degrees of freedom. For this reason, the manual control is based on a joystick that provides speed commands along the longitudinal X-axis and the transverse Y-axis, as well as the rotation about the Z-axis. The speed along the Z-axis is commanded through an independent linear potentiometer. This interface allows the vehicle to be controlled with reasonable efficiency and without excessive difficulty, both for guidance and for positioning it with respect to a previously defined objective.

Two speed sensors are placed, one at each side of the vehicle, in order to execute this control. From the data supplied by these sensors, the speed along the X-axis can be measured and the rotation speed θ̇ evaluated. The displacements along the Y- and Z-axes are controlled in open loop from a previous model of the vehicle dynamics. The low-cost requirement of the vehicle has forced the use of this simplified control. Nevertheless, it has proved efficient enough to operate in calm waters, such as inside harbours, in sheltered coastal zones or in dams.

On the other hand, for observations in waters that are not completely still, or for long observations in which it is not convenient to pay constant attention to the vehicle position, the availability of a dynamic stabilisation system opens up many new possible applications for the vehicle.



Fig. 11. Fuzzy implementation of the obstacle avoidance behaviour.

This vision-based dynamic stabilisation system has proved very effective in positioning the vehicle in front of objects that are clearly identifiable against their environment.

The navigation of the vehicle towards its objective is carried out automatically from the acoustic absolute positioning system. This system is precise enough for the vehicle to arrive at the desired point, where the final positioning is done visually.

The applicability of the vehicle to long observations, such as those required by biologists, has been greatly improved by the image processing system developed. The system detects differences among images and evaluates the variations that may appear in the visualised scene. It therefore automatically disconnects the image recorder when the scene does not present any variation, thus simplifying the later analysis of the material recorded over long observation periods.

In spite of its very low cost, this vehicle can also be used for other applications in the maintenance field, thanks to the operability attainable with its two arms even with their limited movement capabilities.
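As a minimal sketch of the recorder-gating idea described above (disconnecting the image recorder when the scene shows no variation), the code below compares consecutive grey-level frames with a simple mean absolute difference; the metric and the threshold are assumptions for illustration, not the detector actually used.

```python
# Minimal sketch of gating the image recorder on scene change (Section 4):
# record only while consecutive grey-level frames differ noticeably.
# The difference metric and threshold are illustrative assumptions.

from typing import Sequence

CHANGE_THRESHOLD = 8.0   # assumed mean absolute grey-level difference

def mean_abs_difference(prev: Sequence[int], curr: Sequence[int]) -> float:
    """Average per-pixel absolute difference between two flattened frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def should_record(prev: Sequence[int], curr: Sequence[int]) -> bool:
    """Enable the image recorder only when the scene presents variation."""
    return mean_abs_difference(prev, curr) > CHANGE_THRESHOLD
```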

5. Description of control behaviours

The GARBI robot is conceived as a ROV: the user governs the vehicle by means of a human–machine interface. Nevertheless, we are interested in endowing the vehicle with a set of autonomous behaviours, mainly for two reasons. First, we want to keep the user's workload low enough to allow him to concentrate on the guiding tasks. Second, we plan to build a control architecture in order to transform the robot into a true AUV.

As is well known, three main approaches are used to build a control architecture for an autonomous robot: deliberative, behavioural and hybrid (see Refs. [7–9] for more information). Deliberative architectures are based on the sense-model-plan-act principle. In behavioural architectures a set of autonomous behaviours compete and/or co-operate to drive the vehicle to a goal.

Finally, hybrid architectures try to take advantage of the two previous ones while minimising their limitations. Usually, they are structured in three layers: (1) the deliberative layer, based on planning; (2) the control execution layer (which sets behaviours on or off); and (3) the functional reactive layer. The GARBI architecture is intended to be a hybrid one.

Two main issues are related to the implementation of robot behaviours: how to define them and how to merge them in order to solve potential conflicts. With the aim of using a well-defined theory, we use fuzzy logic for behaviour description. Other studies have been carried out with the same intention: in the field of mobile robotics there are some studies [10,11] quite close to our point of view, and, in the field of underwater robotics, there are architectures such as the PRSA [12] which allow the definition of behaviours through fuzzy logic. The main advantages are as follows:

1. The if–then rules used for behaviour definition are close to human reasoning, which simplifies the description of behaviours.
2. Instead of one behaviour being inhibited by another with higher priority, behaviours are merged through aggregation and the final response is obtained by defuzzification; hence, the behaviours' responses are weighted by the fuzzy logic algorithm in order to obtain the final response.

In order to show how behaviours are implemented, we present hereafter the implementation of two basic navigation behaviours: the Go-to-Point behaviour and the obstacle-avoidance behaviour. The first one is a simple two-dimensional (2D) auto-pilot which drives the vehicle to a goal position by minimising the heading error, and the second one uses the distance to the obstacle and the hit angle to produce a repulsion force.

5.1. Go-to-Point behaviour

The Go-to-Point behaviour drives the robot to an XY goal position.


The behaviour tries to reduce the heading error (see Fig. 8) between the robot orientation and the heading needed to drive the robot to the goal position. Depending on the heading error, five different zones have been defined. When the error lies in the ahead zone, the robot must go straight ahead; when the heading error lies in the little-left zone, the robot must turn a little to the left, and so on. For each zone a fuzzy set has been defined, and two rules have been used to define the response of the starboard and port propellers. Obviously, there is a fuzzy boundary between these regions.

5.2. Obstacle-avoidance behaviour

The obstacle-avoidance behaviour is based on two input parameters, the hit angle and the obstacle distance (see Fig. 9). Fig. 11 shows the fuzzy implementation of this behaviour using four rules. The priority of the obstacle-avoidance rules is higher than that of the Go-to-Point behaviour; hence, these rules dominate the vehicle when it is near an obstacle. Fig. 10 shows a test experiment in which the robot, initially located at point (0,0), avoided an obstacle located at point (10,0) while heading towards the goal point (30,0).
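The sketch below illustrates this fuzzy scheme: the Go-to-Point behaviour fires rules over five heading-error zones, the obstacle-avoidance behaviour fires a repulsion rule from the obstacle distance and hit angle, and the responses are merged by weighted aggregation and defuzzified into a single turn command, which is then split into port and starboard propeller commands. The membership functions, zone boundaries, rule outputs and weights are assumptions for illustration, not the rules actually used on GARBI; the higher priority of the avoidance rules is approximated here by letting their weight approach one as the obstacle gets close.

```python
# Illustrative fuzzy-behaviour fusion in the spirit of Section 5.
# Memberships, rule outputs and weights are assumptions, not GARBI's rules.

import math

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Five heading-error zones (radians) and the turn command each rule proposes.
GO_TO_POINT_RULES = [
    ((-math.pi, -math.pi, -0.6), -1.0),   # big left error  -> turn hard left
    ((-1.0, -0.4, 0.0), -0.4),            # little left     -> turn a bit left
    ((-0.3, 0.0, 0.3), 0.0),              # ahead           -> go straight
    ((0.0, 0.4, 1.0), 0.4),               # little right    -> turn a bit right
    ((0.6, math.pi, math.pi), 1.0),       # big right error -> turn hard right
]

def go_to_point(heading_error: float):
    """Return (weight, turn) pairs fired by the Go-to-Point rules."""
    return [(tri(heading_error, *zone), turn) for zone, turn in GO_TO_POINT_RULES]

def obstacle_avoidance(distance: float, hit_angle: float):
    """Repulsion rule: the closer the obstacle, the stronger the turn away
    from the side indicated by the hit angle."""
    near = tri(distance, 0.0, 0.0, 5.0)            # degree of 'obstacle is near'
    turn_away = 1.0 if hit_angle <= 0.0 else -1.0  # steer towards the free side
    return [(near, turn_away)]

def merge(rule_sets) -> float:
    """Aggregate all fired rules and defuzzify by weighted average."""
    fired = [(w, out) for rules in rule_sets for w, out in rules if w > 0.0]
    total = sum(w for w, _ in fired)
    return sum(w * out for w, out in fired) / total if total else 0.0

if __name__ == "__main__":
    turn = merge([go_to_point(heading_error=0.5),
                  obstacle_avoidance(distance=2.0, hit_angle=0.3)])
    base = 0.6                                    # nominal forward thrust
    port, starboard = base + turn, base - turn    # differential propeller commands
    print(f"turn={turn:+.2f}  port={port:.2f}  starboard={starboard:.2f}")
```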

6. Conclusions

In this paper we have presented the main characteristics of the GARBI underwater vehicle; its subsystems, applications and main goals have been described. Our work and experience with this low-cost prototype prove its utility in a wide range of applications, from biological observation tasks to simple repair and maintenance tasks. Nevertheless, more effort is needed to transform this ROV into a true AUV, in particular with regard to mission planning and the control execution of the planned tasks.


Acknowledgements

This work has been done under the European project UNION (ESPRIT-BRA-8972), and with the support of CICYT (Spanish Research Agency).

References

[1] V. Rigaud, Robotics state of the art versus subsea intervention, Report of the UNION project D1.2, ESPRIT-BRA, 1994.
[2] I.D. Bonnon, ROV intervention for effective cable burial – an overview of recent achievements, Oceans 94, vol. 2, Brest, France, 1994, pp. 515–520.
[3] J. Amat, et al., Garbí: the UPC low-cost underwater vehicle, Undersea Robotics & Intelligent Control, A Joint US/Portugal Workshop, Lisbon, Portugal, 1995, pp. 19–24.
[4] S.M. Smith, Modular low-cost AUV systems design, Undersea Robotics & Intelligent Control, A Joint US/Portugal Workshop, Lisbon, Portugal, 1995, pp. 153–157.
[5] J. Batlle, et al., Telemanipulation arms for underwater vehicles, Seventh International Conference on Advanced Robotics, Sant Feliu de Guíxols, Spain, 1995, pp. 267–272.
[6] A. Catala, J. Clavaguera, ROV positioning by means of neural network algorithm, Oceanology International, 1996, p. 96.
[7] W.D. Hall, M.B. Adams, Autonomous vehicle taxonomy, Proceedings of the Symposium on Autonomous Underwater Vehicle Technology, 1992, pp. 49–64.
[8] P. Valavanis, D. Gracanin, M. Matijasevic, R. Kolluru, G. Demetriou, Control architectures for autonomous underwater vehicles, IEEE Control Systems (1997) 48–64.
[9] P. Ridao, J. Batlle, J. Amat, G.N. Roberts, Recent trends in control architectures for autonomous underwater vehicles, International Journal of Systems Science, submitted for publication.
[10] W. Li, Perception-action behavior control of a mobile robot in uncertain environments using fuzzy logic, International Conference on Intelligent Robots and Systems, 1994, pp. 439–446.
[11] J.K. Rosenblatt, C.E. Thorpe, Combining multiple goals in a behavior-based architecture, IEEE/RSJ International Conference on Intelligent Robots and Systems, 1995, pp. 136–141.
[12] K. Ganesan, S.M. Smith, K. White, T. Flanigan, A pragmatic software architecture for UUVs, Proceedings of the Symposium on Autonomous Underwater Vehicle Technology, 1996, pp. 209–215.