[Battlemesh] tests for WBMv3

L. Aaron Kaplan aaron at lo-res.org
Wed May 26 11:47:47 CEST 2010


On May 26, 2010, at 11:19 AM, Luca Tavanti wrote:

> Hi Aaron,
> 
> How can you be sure that static routes are the optimum?

Hm, maybe I was not clear (I hinted at it in the parentheses):
by designing the test network so that the other links are worse (of course you need to confirm this by measurement).
As a result, you get a network where your designed links are optimal (optimal according to some criterion, of course).
Then you set static routes along them.

So - as a result you have a benchmark against which you can compare different protocols in a *repeatable* way.
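To make the "reference" idea concrete: once you have measured per-link costs (ETX-like, additive), the optimum paths for the benchmark fall out of a plain shortest-path computation, and you compare what each protocol actually picked against that. A minimal sketch; node names and costs below are invented, not from any real testbed:

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over measured, additive link costs (e.g. ETX).

    links: dict mapping (node_a, node_b) -> cost; returns (cost, path)."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path          # first pop of dst is minimal
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Designed topology: a-b-c-d is made the good path, the detour is worse.
links = {("a", "b"): 1.0, ("b", "c"): 1.0, ("c", "d"): 1.0,
         ("b", "x"): 3.0, ("x", "y"): 3.0, ("y", "d"): 3.0}
cost, path = shortest_path(links, "a", "d")
print(cost, path)   # 3.0 ['a', 'b', 'c', 'd']
```

If a protocol settles on a path with a higher measured cost than this reference, that difference is something you can report in a repeatable way.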
The problem I see with many of the WBM tests: it is nice to meet and to discover bugs while testing,
but it is really hard to thoroughly convince oneself or somebody else that a given measurement result is really true and repeatable
(changes in SNR, interference, ... you name it).

Therefore I suggested aiming at _some kind of_ reference against which we should compare any protocol.

See my point?




> This can easily be seen on a very simple topology, but how can you define the optimum routes when the topology gets bigger (and the physical parameters are hardly known a priori)?

Well, the optimum route is harder, but optimum links can be designed (think attenuators, lower txpower, shielding, etc.).
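For designing "worse" links, the standard free-space link budget already tells you roughly how much an inline attenuator (or a txpower reduction) buys you. A rough sketch, ideal free space only; all numbers below are invented examples, not measurements:

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free-space path loss in dB (standard d-in-km, f-in-MHz form)."""
    d_km = distance_m / 1000.0
    return 20 * math.log10(d_km) + 20 * math.log10(freq_mhz) + 32.44

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi,
                 distance_m, freq_mhz, atten_db=0.0):
    """Received power for a line-of-sight link, minus any inline attenuator."""
    return (tx_dbm + tx_gain_dbi + rx_gain_dbi
            - fspl_db(distance_m, freq_mhz) - atten_db)

# Same 100 m link on 2437 MHz (channel 6), with and without a 30 dB attenuator.
clean = rx_power_dbm(15, 5, 5, 100, 2437)
worse = rx_power_dbm(15, 5, 5, 100, 2437, atten_db=30)
print(round(clean, 1), round(worse, 1))
```

Real hardware will of course deviate from free space, which is exactly why the designed links still have to be confirmed by measurement.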

And then there is another issue: self-interference from the side lobes of other antennas. This has an interesting effect:
let's assume routing protocol A chooses the path a-b-c-d, and protocol B chooses a-b-x-y-d, OK?
Now... let's assume all these links are perfect or pretty good. We tested each link and they are very good. But we tested them individually. Now suddenly a-b-x-y-d starts to stream TCP traffic, and x interferes with a (same channel and side lobe). So our measurements will say protocol B is worse than A.
Why? Because it took a different path.

BUT: actually we did not test the protocol per se, we tested the metric function of the protocol. It was not self-interference aware. But OLSR (and I believe also Babel?) can be extended with any metric. So... you see... again... it was a result based on the specific test network, and this result says nothing about other networks.

Another issue: I would first agree on hello / originator intervals and settings. OLSR, for example, can be tuned to converge rather slowly or very fast. So if you are going to test convergence... the tests must always document the settings.
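For OLSR that would mean pinning down the timing parameters in olsrd.conf, something like the fragment below (the values are purely illustrative, not a recommendation):

```
Interface "wlan0"
{
    HelloInterval       2.0
    HelloValidityTime   20.0
    TcInterval          5.0
    TcValidityTime      30.0
}
```

Whatever numbers we pick, they should be identical across runs and written into the test report.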

> 
> As for layers 1 and 2, I agree with you.
> At least a rough measure of the RSSI/SNR should be taken along with the tests (even though, being at a campsite, I do not expect much interference...)
> 
absolutely!

My main point above is:
measuring properly is hard. I suggest documenting the testing rigorously: the network setup, the channels, the layer 1 and 2 measurements, the protocol parameters, etc. I am still very skeptical regarding the reproducibility of the results.
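For instance, every run could dump a small machine-readable record next to the traffic logs. A sketch of what such a record might contain; the field names and values are just a suggestion, nothing standardized:

```python
import json

# Hypothetical per-run metadata record; all values are example placeholders.
run = {
    "protocol": "olsrd",
    "settings": {"HelloInterval": 2.0, "TcInterval": 5.0},
    "channel": 6,
    "txpower_dbm": 15,
    "topology": ["a-b", "b-c", "c-d", "b-x", "x-y", "y-d"],
    "link_measurements": {"a-b": {"rssi_dbm": -55, "snr_db": 30}},
}

# One JSON file per run makes later comparison across testbeds possible.
with open("run-001.json", "w") as f:
    json.dump(run, f, indent=2)
```

Comparing two events then becomes a diff of these records rather than an argument from memory.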

Best,
Aaron
OE1RFC ;-)




