I-Scream Specification Outline (Realtime side only) |
===================================================


System Component Startup
************************

CORE
----


Client Interface
----------------


Filter
------

The filter is broken down into three main subcomponents.

- Filter Manager
  The Filter Manager is responsible for managing which
  Filters are used by the hosts. It is available at a
  "well known" location which is pre-programmed into the
  hosts, and it is responsible for creating and managing
  the other components of the filter system.

- Main Filter
  The Main Filter is the single point that links back
  into the CORE of the system. It will connect to the
  DBI and the CLI to deliver data.

- Filters
  There can be multiple Filters, and these are the
  "front line" to the hosts. They all link back to the
  Main Filter to send data into the system. It is
  possible to run these Filters on any machine, allowing
  management of data flow.

At startup a Filter Manager object is activated at the "well
known" location (probably a given machine name at a
predefined port). The Filter Manager will create an instance
of the Main Filter, and any Filters under its control.
Through some mechanism the other Filters, elsewhere on the
network, will register with the Filter Manager. The Filter
Manager will need to tell each Filter the location of the
Main Filter upon registering. The Filter Manager will then
be in a position to receive connections from hosts and pass
them off to Filters.
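
A minimal sketch of that registration step, in Java, is
below. Everything concrete here is an assumption for
illustration - the port number, the one-line wire format
("REGISTER ...", "MAINFILTER ...") and the class name are
invented; the outline only requires that a Filter can
register and be told where the Main Filter is.

    import java.io.*;
    import java.net.*;
    import java.util.*;

    // Hypothetical registration loop on the Filter Manager. A
    // Filter connects, announces its own host:port, and is told
    // the location of the Main Filter in return.
    public class FilterManagerRegistry {

        private static final int WELL_KNOWN_PORT = 4510; // assumed
        private static final String MAIN_FILTER =
                "mainfilter.i-scream.org:4520";          // assumed

        private final List<String> knownFilters = new ArrayList<>();

        public static void main(String[] args) throws IOException {
            new FilterManagerRegistry().serve();
        }

        public void serve() throws IOException {
            try (ServerSocket server = new ServerSocket(WELL_KNOWN_PORT)) {
                while (true) {
                    try (Socket s = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()));
                         PrintWriter out =
                                 new PrintWriter(s.getOutputStream(), true)) {
                        // e.g. "REGISTER filter3.i-scream.org:4530"
                        String line = in.readLine();
                        if (line != null && line.startsWith("REGISTER ")) {
                            knownFilters.add(line.substring(9));
                            // Tell the new Filter where the Main Filter is.
                            out.println("MAINFILTER " + MAIN_FILTER);
                        }
                    }
                }
            }
        }
    }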

System Running State
********************

CORE
----


Client Interface
----------------


Filter
------

When a host first loads up it knows where to locate the
Filter Manager, because the Filter Manager is at a "well
known" location. The host will open a TCP connection to the
Filter Manager to announce itself. The Filter Manager will
then use some method to allocate a Filter to the host. It
should base this decision on various factors, such as the
load on each of the available Filters, and possibly their
location in relation to the host. The host will then be
directed to this Filter for all further communications.
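
One plausible shape for that allocation decision is sketched
below - pick the least loaded of the registered Filters. The
FilterInfo type and its "hosts served" load figure are
invented for illustration; the outline deliberately leaves
the actual policy (load, network locality, or both) open.

    import java.util.*;

    // Hypothetical allocation policy: direct the host to the
    // least loaded of the registered Filters.
    public class FilterAllocator {

        /** A Filter's address plus its last reported load. */
        record FilterInfo(String address, int hostsServed) {}

        static String allocate(List<FilterInfo> filters) {
            return filters.stream()
                    .min(Comparator.comparingInt(FilterInfo::hostsServed))
                    .map(FilterInfo::address)
                    .orElseThrow(() ->
                            new IllegalStateException("no Filters registered"));
        }

        public static void main(String[] args) {
            List<FilterInfo> filters = List.of(
                    new FilterInfo("filter1.i-scream.org:4530", 12),
                    new FilterInfo("filter2.i-scream.org:4530", 7));
            // The host would be redirected here for all further traffic.
            System.out.println("allocated: " + allocate(filters));
        }
    }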

As the system runs the host will send data, probably over
UDP, to its allocated Filter. This choice has been made
because it puts less onus on the host to maintain a
connection; the data is simply sent out. However, to ensure
that the data isn't just disappearing into the depths of the
network, a periodic heartbeat will occur (at a predefined
interval) over TCP to the Filter. This heartbeat can be used
as a form of two-way communication, confirming that
everything is ok and, if required, sending information back
to the host. This heartbeat must occur, otherwise the server
may infer that the host has died.
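
From the host's side, the split between UDP data and a TCP
heartbeat might look roughly like the following. Addresses,
ports, intervals and message strings are all assumptions;
only the UDP/TCP split itself is from the outline above.

    import java.io.*;
    import java.net.*;
    import java.nio.charset.StandardCharsets;

    // Hypothetical host-side loop: monitoring data goes out over
    // UDP "fire and forget", while a periodic TCP heartbeat lets
    // the Filter confirm the host is alive and send data back.
    public class HostSender {

        private static final String FILTER_HOST = "filter1.i-scream.org";
        private static final int UDP_PORT = 4540;        // assumed
        private static final int TCP_PORT = 4541;        // assumed
        private static final long HEARTBEAT_MS = 30_000; // assumed interval

        public static void main(String[] args) throws Exception {
            InetAddress filter = InetAddress.getByName(FILTER_HOST);
            long lastHeartbeat = 0;
            try (DatagramSocket udp = new DatagramSocket()) {
                while (true) {
                    // Send a packet of monitoring data; no connection,
                    // no acknowledgement.
                    byte[] data = "load=0.42 mem=61%"
                            .getBytes(StandardCharsets.UTF_8);
                    udp.send(new DatagramPacket(data, data.length,
                                                filter, UDP_PORT));

                    // Periodically confirm liveness over TCP.
                    if (System.currentTimeMillis() - lastHeartbeat
                            >= HEARTBEAT_MS) {
                        try (Socket s = new Socket(filter, TCP_PORT);
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(s.getInputStream()));
                             PrintWriter out = new PrintWriter(
                                     s.getOutputStream(), true)) {
                            out.println("HEARTBEAT");
                            String reply = in.readLine(); // config sent back?
                            if (reply != null) {
                                System.out.println("filter says: " + reply);
                            }
                        }
                        lastHeartbeat = System.currentTimeMillis();
                    }
                    Thread.sleep(5_000); // data interval (assumed)
                }
            }
        }
    }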

This could link in to alerting. An amber alert could be
raised for a host if the server stops receiving UDP packets,
and a red alert raised if the heartbeat doesn't occur.
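
On the server side that escalation reduces to two timeouts.
A minimal sketch, with thresholds and names invented for
illustration:

    // Hypothetical liveness check: derive an alert level for a
    // host from when its last UDP packet and last TCP heartbeat
    // were seen.
    public class HostLiveness {

        enum Alert { NONE, AMBER, RED }

        private static final long UDP_SILENCE_MS = 60_000;       // assumed
        private static final long HEARTBEAT_TIMEOUT_MS = 90_000; // assumed

        static Alert check(long now, long lastUdpPacket, long lastHeartbeat) {
            if (now - lastHeartbeat > HEARTBEAT_TIMEOUT_MS) {
                return Alert.RED;   // heartbeat missed: host presumed dead
            }
            if (now - lastUdpPacket > UDP_SILENCE_MS) {
                return Alert.AMBER; // data stopped, but host still alive
            }
            return Alert.NONE;
        }

        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            System.out.println(check(now, now - 120_000, now - 30_000));
            System.out.println(check(now, now - 120_000, now - 120_000));
        }
    }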

If, for some reason, the Filter were to disappear, the host
should fall back on its initial discovery mechanism - i.e.
contacting the Filter Manager at its "well known" location.
The host should report that it has lost its Filter (so the
Filter Manager can investigate, and remove it from its list
of Filters), and then the Filter Manager will assign a new
Filter to the host. Communication can then continue.
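
The host-side fallback might look something like the sketch
below; the hostname, port and message strings are invented,
and the reply is assumed to name the replacement Filter.

    import java.io.*;
    import java.net.*;

    // Hypothetical failover: if the allocated Filter stops
    // responding, go back to the Filter Manager's "well known"
    // location, report the loss, and take whatever Filter is
    // assigned next.
    public class FilterFailover {

        private static final String MANAGER_HOST = "manager.i-scream.org";
        private static final int MANAGER_PORT = 4510; // assumed

        static String reacquireFilter(String lostFilter) throws IOException {
            try (Socket s = new Socket(MANAGER_HOST, MANAGER_PORT);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out =
                         new PrintWriter(s.getOutputStream(), true)) {
                // Report which Filter vanished, so the Filter Manager
                // can investigate and drop it from its list.
                out.println("LOSTFILTER " + lostFilter);
                // Reply names the replacement,
                // e.g. "FILTER filter2.i-scream.org:4540".
                return in.readLine();
            }
        }
    }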

The idea of plugins to the Filters has been introduced.
These plugins will implement a predefined plugin interface,
and can be chained together at the Filter. Using the
interface we can easily add future plugins that do anything
from parsing new data formats to implementing encryption
algorithms. The Filter will pass incoming data through each
plugin it has available, in turn, and then finally pass the
data on to the Main Filter. The Filter need not have any
real knowledge about the content of the data.
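
A minimal sketch of the plugin idea is below. The interface
name and its byte-array shape are assumptions; only the
"implement a common interface, chain at the Filter" part is
from the outline above.

    import java.util.*;

    public class PluginChain {

        /** The predefined plugin interface: transform one packet. */
        interface FilterPlugin {
            byte[] process(byte[] data);
        }

        private final List<FilterPlugin> plugins = new ArrayList<>();

        void addPlugin(FilterPlugin p) {
            plugins.add(p);
        }

        /** Run data through every plugin, then hand it onwards. */
        byte[] handleIncoming(byte[] data) {
            for (FilterPlugin p : plugins) {
                data = p.process(data); // each sees the previous output
            }
            return data;                // now sent to the Main Filter
        }

        public static void main(String[] args) {
            PluginChain chain = new PluginChain();
            // A trivial stand-in plugin; a parser or an encryption
            // step would slot in here in exactly the same way.
            chain.addPlugin(d -> new String(d).toUpperCase().getBytes());
            System.out.println(
                    new String(chain.handleIncoming("cpu=0.9".getBytes())));
        }
    }

A new data format or an encryption algorithm is then just
another FilterPlugin in the chain; the Filter itself never
needs to inspect the bytes.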