The main primitive operations of a binary search tree are:
- Add: adds a new node
- Get: retrieves a specified node
- Remove: removes a node
- Traversal: moves through the structure
Additional primitives can be defined:
- IsEmpty: reports whether the tree is empty
- IsFull: reports whether the tree is full
- Initialise: creates/initializes the tree
- Destroy: deletes the contents of the tree (may be implemented by reinitializing the tree)
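As a concrete illustration of the auxiliary primitives, a minimal Python sketch follows; it assumes a simple immutable node record, and the TreeNode name and helper names are illustrative rather than taken from the text. Add, Get, Remove and Traversal correspond to the insertion, search, deletion and traversal code in the sections below.

from collections import namedtuple

# A node is (left, value, right); an empty tree is represented by None.
TreeNode = namedtuple("TreeNode", "left value right")

def initialise():
    """Initialise: create an empty tree."""
    return None

def is_empty(tree):
    """IsEmpty: report whether the tree is empty."""
    return tree is None

def is_full(tree):
    """IsFull: a pointer-based tree is never full; it is limited only by memory."""
    return False

def destroy(tree):
    """Destroy: delete the contents of the tree by reinitialising it."""
    return initialise()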
The basic operations on a binary search tree take time proportional to the height of the tree. For a complete binary tree with n nodes, such operations run in Θ(lg n) worst-case time. If the tree is a linear chain of n nodes, however, the same operations take Θ(n) worst-case time.
1. Searching
Searching
a binary tree for a specific value is a recursive process that we can perform
due to the ordering it imposes. We begin by examining the root. If the value
equals the root, the value exists in the tree. If it is less than the root,
then it must be in the left subtree, so we recursively search the left subtree
in the same manner. Similarly, if it is greater than the root, then it must be
in the right subtree, so we recursively search the right subtree in the same
manner. If we reach an external node, then the item is not where it would be if
it were present, so it does not lie in the tree at all. A comparison may be
made with binary search, which operates in nearly the same way but using random
access on an array instead of following links.
The algorithm in pseudo-code is:
TREE_SEARCH(x,k)
1. if x=Null or k=key[x]
2. then return x
3. if k < key[x]
4. then return TREE_SEARCH(left[x],k)
5. else return TREE_SEARCH(right[x],k)
6. exit
SEARCH_BINARY_TREE(treenode, value):
1. if treenode is None: return None                      # failure
   left, nodevalue, right = treenode.left, treenode.value, treenode.right
2. if nodevalue > value:
       return search_binary_tree(left, value)
   elif value > nodevalue:
       return search_binary_tree(right, value)
   else:
       return nodevalue                                   # found
3. exit
This operation requires O(log n) time in the average case, but needs Ω(n) time in the worst case, when the unbalanced tree resembles a linked list.
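For comparison, a runnable Python rendering of this search might look like the sketch below; it assumes an immutable (left, value, right) node tuple, and the names are illustrative rather than taken from the text.

from collections import namedtuple

TreeNode = namedtuple("TreeNode", "left value right")  # an empty tree is None

def search_binary_tree(treenode, value):
    """Return the stored value if found, otherwise None."""
    if treenode is None:
        return None                                       # fell off the tree: failure
    if value < treenode.value:
        return search_binary_tree(treenode.left, value)   # must be in the left subtree
    elif value > treenode.value:
        return search_binary_tree(treenode.right, value)  # must be in the right subtree
    else:
        return treenode.value                             # found

# Example: the tree with root 5, left child 3 and right child 8.
tree = TreeNode(TreeNode(None, 3, None), 5, TreeNode(None, 8, None))
assert search_binary_tree(tree, 8) == 8
assert search_binary_tree(tree, 7) is None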
2. Insertion
Insertion begins as a search would begin; if the value is not found, we search the left or right subtree as before. Eventually, we reach an external node, and we add the new value at that position. In other words, we examine the root and recursively insert the new node into the left subtree if the new value is less than or equal to the root, or into the right subtree if the new value is greater than the root.
The insertion algorithm is:
TREE_INSERT(T,x)
1. y <- Null
2. z <- root[T]
3. while z != Null
4. do y <- z
5. if key[x] < key[z]
6. then z <- left[z]
7. else z <- right[z]
8. p[x] <- y
9. if y = Null
10. then root[T] <- x
11. else if key[x] < key[y]
12. then left[y] <- x
13. else right[y] <- x
14. exit.
BINARY_TREE_INSERT(treenode, value):
1. if treenode is None: return (None, value, None)        # external node: new leaf
   left, nodevalue, right = treenode.left, treenode.value, treenode.right
2. if nodevalue > value:
       return TreeNode(binary_tree_insert(left, value), nodevalue, right)
   else:
       return TreeNode(left, nodevalue, binary_tree_insert(right, value))
3. exit.
This operation requires O(log n) time in the average case,
but needs Ω(n) time in the worst case.
Another way to explain insertion
is that in order to insert a new node in the tree, its value is first compared
with the value of the root. If its value is less than the root's, it is then
compared with the value of the root's left child. If its value is greater, it
is compared with the root's right child. This process continues until the new
node is compared with a leaf node, and then it is added as this node's right or
left child, depending on its value.
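A runnable Python sketch of this insertion, in the same purely functional style as the pseudo-code above (a new node is built on the way back up instead of mutating the tree), might look as follows; the TreeNode tuple and the function name are assumptions for illustration.

from collections import namedtuple

TreeNode = namedtuple("TreeNode", "left value right")  # an empty tree is None

def binary_tree_insert(treenode, value):
    """Return a new tree equal to treenode with value inserted."""
    if treenode is None:
        return TreeNode(None, value, None)            # reached an external node: attach here
    if value < treenode.value:
        return TreeNode(binary_tree_insert(treenode.left, value),
                        treenode.value,
                        treenode.right)
    else:                                             # equal values go right, as in the pseudo-code
        return TreeNode(treenode.left,
                        treenode.value,
                        binary_tree_insert(treenode.right, value))

tree = None
for v in (5, 3, 8):
    tree = binary_tree_insert(tree, v)
# tree is now TreeNode(TreeNode(None, 3, None), 5, TreeNode(None, 8, None))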
3. Deletion
There are several cases to be considered:
- Deleting a leaf: Deleting a node with no children is easy, as we can simply remove it from the tree.
- Deleting a node with one child: Delete it and replace it with its child.
- Deleting a node with two children: Suppose the node to be deleted is called N. We replace the value of N with either its in-order successor (the left-most child of the right subtree) or its in-order predecessor (the right-most child of the left subtree).
Once we find either the in-order successor or predecessor, we swap it with N and then delete it. Since either of these nodes must have fewer than two children (otherwise it could not be the in-order successor or predecessor), it can be deleted using the previous two cases.
In a good implementation, it is generally recommended not to use the same one of these two nodes consistently, because doing so can unbalance the tree.
BINARY_TREE_DELETE(treenode, value):
1. if treenode is None: return None                       # value not found
   left, nodevalue, right = treenode.left, treenode.value, treenode.right
2. if nodevalue == value:
       if left is None:
           return right
       elif right is None:
           return left
       else:
           maxvalue, newleft = find_remove_max(left)
           return TreeNode(newleft, maxvalue, right)
   elif value < nodevalue:
       return TreeNode(binary_tree_delete(left, value), nodevalue, right)
   else:
       return TreeNode(left, nodevalue, binary_tree_delete(right, value))
3. exit
FIND_REMOVE_MAX(treenode):
1. left, nodevalue, right = treenode.left, treenode.value, treenode.right
2. if right is None: return (nodevalue, left)
   else:
       (maxvalue, newright) = find_remove_max(right)
       return (maxvalue, (left, nodevalue, newright))
3. exit.
Although this operation does not
always traverse the tree down to a leaf, this is always a possibility; thus in
the worst case, it requires time proportional to the height of the tree. It
does not require more even when the node has two children, since it still
follows a single path and visits no node twice.
[Figure: deletion in binary search trees, an example: delete 4 (a leaf node), delete 10 (a node with no left subtree), and delete 13 (a node with both left and right subtrees).]
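The deletion logic above can be rendered as runnable Python along the following lines; this sketch uses the in-order predecessor (the maximum of the left subtree) for the two-child case, as the pseudo-code does, and the TreeNode tuple and names are illustrative.

from collections import namedtuple

TreeNode = namedtuple("TreeNode", "left value right")  # an empty tree is None

def find_remove_max(treenode):
    """Return (maximum value, subtree with that value removed)."""
    if treenode.right is None:
        return treenode.value, treenode.left
    maxvalue, newright = find_remove_max(treenode.right)
    return maxvalue, TreeNode(treenode.left, treenode.value, newright)

def binary_tree_delete(treenode, value):
    """Return a new tree equal to treenode with value removed, if present."""
    if treenode is None:
        return None                                   # value not found
    if value < treenode.value:
        return TreeNode(binary_tree_delete(treenode.left, value),
                        treenode.value, treenode.right)
    if value > treenode.value:
        return TreeNode(treenode.left, treenode.value,
                        binary_tree_delete(treenode.right, value))
    # Found the node to delete.
    if treenode.left is None:
        return treenode.right                         # no child or right child only: splice out
    if treenode.right is None:
        return treenode.left                          # left child only: splice out
    # Two children: replace the value with the in-order predecessor.
    maxvalue, newleft = find_remove_max(treenode.left)
    return TreeNode(newleft, maxvalue, treenode.right)

# Example: deleting the root 5 replaces it with its in-order predecessor 4.
tree = TreeNode(TreeNode(None, 3, TreeNode(None, 4, None)), 5, TreeNode(None, 8, None))
assert binary_tree_delete(tree, 5).value == 4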
4. Traversal
Once the binary search tree has
been created, its elements can be retrieved in order by recursively traversing
the left subtree, visiting the root, then recursively traversing the right
subtree. The tree may also be traversed in pre-order or post-order.
TRAVERSE_BINARY_TREE(treenode):
1. if treenode is None: return
2. left, nodevalue, right = treenode
3. traverse_binary_tree(left)
4. visit(nodevalue)
5. traverse_binary_tree(right)
6. exit.
Traversal requires Ω(n) time, since it must visit every node. This algorithm is also O(n), so it is asymptotically optimal.
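A runnable in-order traversal in Python, assuming the same illustrative TreeNode tuple, is sketched below; the visit callback simply defaults to printing each value.

from collections import namedtuple

TreeNode = namedtuple("TreeNode", "left value right")  # an empty tree is None

def traverse_binary_tree(treenode, visit=print):
    """In-order traversal: left subtree, then the node itself, then the right subtree."""
    if treenode is None:
        return
    traverse_binary_tree(treenode.left, visit)
    visit(treenode.value)
    traverse_binary_tree(treenode.right, visit)

tree = TreeNode(TreeNode(None, 3, None), 5, TreeNode(None, 8, None))
traverse_binary_tree(tree)   # prints 3, 5, 8, i.e. the values in sorted order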
5. Sort
A binary search tree can be used
to implement a simple but inefficient sort algorithm. Similar to insertion sort,
we insert all the values we wish to sort into a new ordered data structure, in
this case a binary search tree, then traverse it in order, building our result:
BUILD_BINARY_TREE(values):
1. tree = None
2. for v in values:
       tree = binary_tree_insert(tree, v)
3. return tree
TRAVERSE_BINARY_TREE(treenode):
1. if treenode is None: return []
   else:
       left, value, right = treenode
2. return (traverse_binary_tree(left) + [value] + traverse_binary_tree(right))
The worst-case time of build_binary_tree is Ω(n²): if you feed it a sorted list of values, it chains them into a linked list with no left subtrees. For example, build_binary_tree([1, 2, 3, 4, 5]) yields the tree (None, 1, (None, 2, (None, 3, (None, 4, (None, 5, None))))). There are a variety of schemes for overcoming this flaw with simple binary trees; the most common is the self-balancing binary search tree. If this same procedure is done using such a tree, the overall worst-case time is O(n log n), which is asymptotically optimal for a comparison sort.
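Putting the pieces together, a tree sort along these lines might look like the Python sketch below; tree_sort and the other names are illustrative, and since no balancing is done, the Ω(n²) worst case discussed above still applies.

from collections import namedtuple

TreeNode = namedtuple("TreeNode", "left value right")  # an empty tree is None

def binary_tree_insert(treenode, value):
    if treenode is None:
        return TreeNode(None, value, None)
    if value < treenode.value:
        return TreeNode(binary_tree_insert(treenode.left, value),
                        treenode.value, treenode.right)
    return TreeNode(treenode.left, treenode.value,
                    binary_tree_insert(treenode.right, value))

def traverse_binary_tree(treenode):
    """Return the stored values in sorted (in-order) order."""
    if treenode is None:
        return []
    return (traverse_binary_tree(treenode.left)
            + [treenode.value]
            + traverse_binary_tree(treenode.right))

def tree_sort(values):
    """Insert every value into a binary search tree, then read it back in order."""
    tree = None
    for v in values:                      # n insertions, each costing O(height)
        tree = binary_tree_insert(tree, v)
    return traverse_binary_tree(tree)

assert tree_sort([3, 1, 4, 1, 5, 9, 2, 6]) == [1, 1, 2, 3, 4, 5, 6, 9]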