art.py (forked from swfarnsworth/n2c2_2018_task2_significance)
1226 lines (949 loc) · 42.8 KB
'''art.py -- Approximate Randomization Test

This script carries out a significance test on the output of an
instance-based machine learner, based on the theory of approximate
randomization tests:

    Eric W. Noreen, Computer-intensive Methods for Testing Hypotheses:
    An Introduction, John Wiley & Sons, New York, NY, USA, 1989.

No assumptions are made about the distribution of the variables. The only
assumption is that there are no inter-instance dependencies, i.e. knowing
the class label of one instance should not help in predicting the class
label of another instance. This assumption is violated in the output of
the MBT (memory-based tagger). A good example of why no inter-instance
dependencies should be present is given in:

    Alexander Yeh, More accurate tests for the statistical significance of
    result differences, in: Proceedings of the 18th International Conference
    on Computational Linguistics, Volume 2, pages 947-953, 2000.

TEST STATISTICS
At the moment, the test statistics are differences in macro-recall,
macro-precision, macro-f-score, micro-f-score, and accuracy. This can be
changed by adapting the getscores() function.

DEPENDENCIES
This script depends on confusionmatrix.py and combinations.py
(www.clips.ua.ac.be/~vincent/software.html) and optionally on scipy
(www.scipy.org).

Copyright (c) 2013 CLiPS. All rights reserved.
License: GNU General Public License, see
http://www.clips.ua.ac.be/~vincent/scripts/LICENSE.txt

Speed improvements by Sam Henry (July 2019)
'''
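For readers unfamiliar with the procedure, a minimal self-contained sketch of an approximate randomization test on the absolute accuracy difference. The function name and parameters here are illustrative only, not part of this script; the real script supports several test statistics and an exact mode.

```python
import random

def art_sketch(gold, sys1, sys2, n_shuffles=10000, seed=0):
    """Approximate randomization test on the absolute accuracy difference.

    For every shuffle, each instance's two predictions are swapped between
    the systems with probability 0.5; the p-value is the (smoothed) fraction
    of shuffles whose pseudo-statistic reaches the observed one.
    """
    rng = random.Random(seed)
    acc = lambda sys: sum(g == s for g, s in zip(gold, sys)) / len(gold)
    ref = abs(acc(sys1) - acc(sys2))   # the observed test statistic
    nge = 0
    for _ in range(n_shuffles):
        a, b = [], []
        for p1, p2 in zip(sys1, sys2):
            if rng.random() < 0.5:
                a.append(p1); b.append(p2)
            else:
                a.append(p2); b.append(p1)
        if abs(acc(a) - acc(b)) >= ref:
            nge += 1
    # add-one smoothing, as in Noreen (1989)
    return (nge + 1) / (n_shuffles + 1)
```

With identical systems every shuffle matches the (zero) reference difference, so the returned probability is 1.0 and H0 is never rejected.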
__author__ = 'Vincent Van Asch'
__date__ = 'September 2013'
__version__ = '3.0.3'
__url__ = 'http://www.clips.ua.ac.be/~vincent/software.html'

import sys, os, time
import random
import getopt
import datetime
from math import pow

# scipy is optional: without it, the sign-test check is silently skipped.
# Newer scipy versions replaced binom_test with binomtest.
try:
    from scipy.stats import binom_test
except ImportError:
    try:
        from scipy.stats import binomtest

        def binom_test(k, n, p=0.5):
            return binomtest(k, n, p).pvalue
    except ImportError:
        pass

import confusionmatrix
import combinations
def loginfo(s):
    print('%s: %s' % (time.strftime('%d/%m/%Y %H:%M:%S'), s))
def fread(fname, index=None, sep=None, encoding='utf8'):
    '''Reads in a file as a list.

    sep: feature separator
    index: if None, the elements of the output list are the full lines
           if int, the elements of the output list are the strings at position index
           if tuple, the elements of the output list are slices of the full lines (as lists)
    '''
    output = []
    with open(os.path.abspath(os.path.expanduser(fname)), 'r', encoding=encoding) as f:
        for l in f:
            line = l.strip()
            if line:
                if index is None:
                    output.append(line)
                else:
                    line = line.split(sep)
                    if isinstance(index, int):
                        output.append(line[index])
                    elif isinstance(index, (list, tuple)):
                        if len(index) != 2:
                            raise ValueError('index should have length 2, not %d' % len(index))
                        output.append(line[index[0]:index[1]])
                    else:
                        raise TypeError('index should be None, int or tuple')
    return output
def strata_read(fname, sep=None, encoding='utf8'):
    out = {}
    with open(os.path.abspath(os.path.expanduser(fname)), 'r', encoding=encoding) as f:
        for l in f:
            line = l.strip()
            if line:
                parts = line.split(sep)
                stratum = parts[0]
                group = parts[1]
                data = [float(x) for x in parts[2:]]
                if stratum in out:
                    out[stratum][group] = data
                else:
                    out[stratum] = {group: data}
    return out
MBTSEP = '\x13'

def mbtread(fname, sep='<utt>'):
    '''Reads in the sentences from an mbt-format file.

    sep: sentence separator (empty lines are also treated as sentence
         boundaries)

    Returns a list of strings; each string contains the concatenated token
    labels of one sentence.'''
    output = []
    sentence = []
    with open(os.path.abspath(os.path.expanduser(fname)), 'r') as f:
        for l in f:
            line = l.strip()
            if line and line != sep:
                sentence.append(line.split()[-1])
            else:
                if sentence:
                    output.append(MBTSEP.join(sentence))
                sentence = []
    if sentence:
        output.append(MBTSEP.join(sentence))
    return output
def readtraining(fname, index=-1, sep=None):
    '''Reads in a training file and returns a dictionary with the
    distribution of the classes in training'''
    d = {}
    for label in fread(fname, sep=sep, index=index):
        try:
            d[label] += 1
        except KeyError:
            d[label] = 1
    return d
def signtest(gold, system1, system2):
    '''Two-sided sign test for labeling accuracy'''
    assert len(gold) == len(system1) == len(system2)
    # Count the instances that exactly one of the systems labels correctly;
    # instances that both systems get right (or wrong) are ignored
    s1correct = 0
    s2correct = 0
    wrong = 0
    for g, s1, s2 in zip(gold, system1, system2):
        if g == s1 and g != s2:
            s1correct += 1
        elif g == s2 and g != s1:
            s2correct += 1
        elif g != s1 and g != s2:
            wrong += 1
    # The total number of predictions that are correctly predicted
    # by exactly one system
    total = s1correct + s2correct
    # Test the smaller count because of a bug with unequal N in binom_test
    correct = min([s1correct, s2correct])
    return binom_test(correct, total)
def termsigntest(gold, system1, system2):
    '''Sign test for term extraction recall'''
    print('WARNING: this function has not been validated')
    # True positives found by only one system
    s1correct = 0
    s2correct = 0
    fn = 0
    for t in gold:
        if t in system1:
            if t not in system2:
                s1correct += 1
        elif t in system2:
            s2correct += 1
        else:
            fn += 1
    # The total number of terms that are correctly retrieved by exactly
    # one system
    total = s1correct + s2correct
    return binom_test(s1correct, total)
def getscores(gold, system, training=None):
    '''
    Takes a gold and a system list and returns a dictionary with
    macro-recall, macro-precision, macro-f-score, micro-f-score and accuracy.
    If training is given, the class label counts from training are used to
    compute the scores.

    gold: a list of class labels
    system: a list of class labels (in the same order as gold)
    training: a dictionary:
        key: class label
        value: number of occurrences

    Returns a dictionary:
        key: performance measure name
        value: performance score
    '''
    assert len(gold) == len(system)
    # Light mode for speed
    cm = confusionmatrix.ConfusionMatrix(light=True)
    # Add training
    if training:
        for k, v in training.items():
            for i in range(v):
                cm.add_training([k])
    # Add data
    for g, s in zip(gold, system):
        cm.single_add(g, s)
    output = {'macro-av. recall': cm.averaged(level=confusionmatrix.MACRO, score=confusionmatrix.RECALL, training=bool(training)),
              'macro-av. precision': cm.averaged(level=confusionmatrix.MACRO, score=confusionmatrix.PRECISION, training=bool(training)),
              'macro-av. f-score': cm.averaged(level=confusionmatrix.MACRO, score=confusionmatrix.FSCORE, training=bool(training)),
              'micro-av. f-score': cm.averaged(level=confusionmatrix.MICRO, score=confusionmatrix.FSCORE, training=bool(training)),
              'micro-av. precision': cm.averaged(level=confusionmatrix.MICRO, score=confusionmatrix.PRECISION, training=bool(training)),
              'micro-av. recall': cm.averaged(level=confusionmatrix.MICRO, score=confusionmatrix.RECALL, training=bool(training)),
              'lfb-micro-av. f-score': cm.averaged(level=confusionmatrix.MICROt, score=confusionmatrix.FSCORE, training=bool(training)),
              'lfb-micro-av. precision': cm.averaged(level=confusionmatrix.MICROt, score=confusionmatrix.PRECISION, training=bool(training)),
              'lfb-micro-av. recall': cm.averaged(level=confusionmatrix.MICROt, score=confusionmatrix.RECALL, training=bool(training)),
              'accuracy': cm.accuracy()}
    return output
def getscores2(gold, system, training=None):
    '''Takes gold and system term sets and returns recall, precision and
    f1-score based on set overlap'''
    overlap = float(len(system.intersection(gold)))
    P = overlap / len(system)
    R = overlap / len(gold)
    if P == 0 or R == 0:
        F = 0.0
    else:
        F = 2 * P * R / (P + R)
    return {'recall': R, 'precision': P, 'f1-score': F}
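Worked through by hand on hypothetical term sets, the set-overlap scores come out as follows (the computation below inlines the same formulas):

```python
gold = {'a', 'b', 'c', 'd'}     # gold-standard terms
system = {'a', 'b', 'x'}        # terms retrieved by a system
overlap = len(system & gold)    # 2 shared terms
P = overlap / len(system)       # precision: 2/3
R = overlap / len(gold)         # recall: 2/4
F = 2 * P * R / (P + R)         # f1-score: 4/7, about 0.571
```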
def getscoresmbt(gold, system, training=None):
    '''Returns the mbt accuracy over the sentences'''
    correct = 0
    total = 0
    for g, s in zip(gold, system):
        g = g.split(MBTSEP)
        s = s.split(MBTSEP)
        assert len(g) == len(s)
        total += len(g)
        for gi, si in zip(g, s):
            if gi == si:
                correct += 1
    return {'accuracy': correct / float(total)}
def getscoresmbtmulti(gold, system, training=None, sep='_'):
    '''Returns scores for mbt with multiple labels per token'''
    # Create the generator
    def reader(gold, system):
        for g, s in zip(gold, system):
            g = g.split(MBTSEP)
            s = s.split(MBTSEP)
            assert len(g) == len(s)
            for gi, si in zip(g, s):
                gi = set(gi.split(sep))
                si = set(si.split(sep))
                yield gi, si
    r = reader(gold, system)
    cm = confusionmatrix.ConfusionMatrix(compute_none=True)
    for g, p in r:
        cm.add(list(g), list(p))
    out = {}
    for label in cm.labels:
        out[label] = cm.fscore(label)
    out['micro-fscore'] = cm.averaged(level=confusionmatrix.MICRO, score=confusionmatrix.FSCORE)
    return out
def average(dummy, values, training=None):
    return {'mean': sum(values) / float(len(values))}
def teststatistic(gold, system1, system2, training=None, scoring=getscores, absolute=True):
    '''Takes the gold and system lists and returns the value of the test
    statistic for every performance measure computed by scoring.

    scoring: the function that calculates the performances
    absolute: if True, the absolute difference between system1 and system2
              performance; if False, system1 performance minus system2
              performance
    '''
    # Get the reference performances
    scores1 = scoring(gold, system1, training=training)
    scores2 = scoring(gold, system2, training=training)
    # Compute the differences between system1 and system2
    diffs = {}
    for k in set(list(scores1.keys()) + list(scores2.keys())):
        diff = scores1.get(k, 0) - scores2.get(k, 0)
        if absolute:
            diff = abs(diff)
        diffs[k] = diff
    return diffs
def distribute(s):
    '''Distributes the elements of s randomly over 2 sets'''
    batch1 = []
    batch2 = []
    data = s[:]
    while data:
        d = data.pop()
        b = random.choice([batch1, batch2])
        b.append(d)
    return set(batch1), set(batch2)
def getprobabilities(ngecounts, N, add=1, verbose=False):
    '''Calculates the probabilities from the nge counts as:

        (nge + add) / (N + add)

    ngecounts: a dictionary:
        key: performance measure name
        value: nge, the number of shuffles for which the pseudo statistic
               was at least as large as the actual one
    N: number of trials
    add: smoothing integer

    Returns a dictionary:
        key: performance measure name
        value: probability
    '''
    # Calculate probabilities
    probs = {}
    for k, nge in ngecounts.items():
        prob = (nge + add) / float(N + add)
        probs[k] = prob
    if verbose:
        print('Probabilities for accepting H0:')
        for name in sorted(probs.keys()):
            print(' %-23s: %.5g' % (name, probs[name]))
    return probs
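A quick worked instance of the smoothing formula, with illustrative counts: if the pseudo statistic reached the observed one in 3 of 999 shuffles, add-one smoothing yields

```python
nge, N, add = 3, 999, 1                # illustrative counts, not from a real run
prob = (nge + add) / float(N + add)    # (3 + 1) / (999 + 1) = 0.004
```

The smoothing keeps the reported probability strictly above zero, which is appropriate because approximate randomization never enumerates all permutations.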
def get_alternatives(l):
    '''Returns all 2**l binary assignment vectors of length l'''
    # number of bins
    nbins = int(pow(2, l))
    # Fill the bins
    bins = [[] for i in range(nbins)]
    for i in range(l):
        switchpoint = pow(2, i)
        filler = False
        for j, bin in enumerate(bins):
            if not j % switchpoint:
                filler = not filler
            bin.append(int(filler))
    return bins
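The bins enumerate every way of deciding, per instance, whether to swap the two systems' predictions; for l = 2 an equivalent enumeration (shown with itertools purely for illustration) is:

```python
from itertools import product

# Each tuple decides, per position, whether the predictions are swapped.
assignments = sorted(product([0, 1], repeat=2))
# 2**2 = 4 assignment vectors: (0, 0), (0, 1), (1, 0), (1, 1)
```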
def exactlabelingsignificance(gold, system1, system2, verbose=False, training=None, scoring=getscores, common=[], common_gold=[]):
    '''Carries out exact randomization'''
    # number of permutations
    N = pow(2, len(gold))
    if verbose:
        loginfo('%d permutations' % N)
    if N > 5000000:
        raise ValueError('The number of permutations is too big. Aborting.')
    # the reference test statistics
    refdiffs = teststatistic(gold + common_gold, system1 + common, system2 + common, training=training, scoring=scoring)
    # Get all combinations
    size = len(gold)
    count = 0
    systems = [system1, system2]
    ngecounts = {}
    if N >= 10:
        nom = int(N / 10.0)
    else:
        nom = 1
    alternatives = get_alternatives(size)
    while alternatives:
        alt = alternatives.pop()
        count += 1
        shuffle1 = [systems[k][j] for j, k in enumerate(alt)]
        shuffle2 = [systems[1 - k][j] for j, k in enumerate(alt)]
        # the test statistics
        diffs = teststatistic(gold + common_gold, shuffle1 + common, shuffle2 + common, training=training, scoring=scoring)
        if verbose and not (count % nom):
            loginfo('Calculated permutation %d/%d' % (count, N))
        for k in refdiffs.keys():
            pseudo = diffs[k]
            actual = refdiffs[k]
            if pseudo >= actual:
                ngecounts[k] = ngecounts.get(k, 0) + 1
            elif k not in ngecounts:
                ngecounts[k] = 0
    assert count == N
    assert set(ngecounts.keys()) == set(refdiffs.keys())
    # Calculate probabilities
    return getprobabilities(ngecounts, N, add=0, verbose=True)
def labelingsignificance(gold, system1, system2, N=1000, verbose=False, training=None, scoring=getscores, show_probs=True, common=[], common_gold=[]):
    '''Calculates the approximate randomization test for class labeling
    experiments.

    Returns the probabilities for accepting H0 for macro-recall,
    macro-precision, macro-fscore, micro-fscore and accuracy.

    training: the counts of the class labels in the training file
    N: number of shuffles
    '''
    # the reference test statistics
    refdiffs = teststatistic(gold + common_gold, system1 + common, system2 + common, training=training, scoring=scoring)
    # start shuffling
    source = [[s1, s2] for s1, s2 in zip(system1, system2)]
    if N >= 10:
        nom = int(N / 10.0)
    else:
        nom = 1
    ngecounts = {}
    for i in range(N):
        shuffle1 = []
        shuffle2 = []
        for preds in source:
            random.shuffle(preds)
            shuffle1.append(preds[0])
            shuffle2.append(preds[1])
        # the test statistics
        diffs = teststatistic(gold + common_gold, shuffle1 + common, shuffle2 + common, training=training, scoring=scoring)
        # see whether the shuffled systems differ more than the originals
        for k in refdiffs.keys():
            pseudo = diffs[k]
            actual = refdiffs[k]
            if pseudo >= actual:
                ngecounts[k] = ngecounts.get(k, 0) + 1
            elif k not in ngecounts:
                ngecounts[k] = 0
        if verbose and not ((i + 1) % nom):
            loginfo('Calculated shuffle %d/%d' % (i + 1, N))
    # Sign-test check (only available when scipy could be imported)
    if scoring.__name__ == 'getscores':
        try:
            s = signtest(gold, system1, system2)
            if verbose:
                loginfo('Sign-test probability: %.4g' % s)
        except NameError:
            pass
    assert set(ngecounts.keys()) == set(refdiffs.keys())
    # Calculate probabilities
    return getprobabilities(ngecounts, N, add=1, verbose=show_probs)
def exacttermsignificance(gold, system1, system2, verbose=False, absolute=False):
    '''Computes exact term extraction significance'''
    # Take the terms that are unique to one system
    source = []
    doubles = []
    for t in system1.union(system2):
        if t in system1 and t not in system2:
            source.append(t)
        elif t not in system1 and t in system2:
            source.append(t)
        else:
            doubles.append(t)
    # The number of combinations
    N = 1
    for i in range(len(source) + 1):
        N += combinations.ncombinations(len(source), i)
    if verbose:
        loginfo('%d combinations' % N)
    if N > 5000000:
        raise ValueError('The number of combinations is too big. Aborting.')
    # the reference test statistics
    refdiffs = teststatistic(gold, system1, system2, scoring=getscores2, absolute=absolute)
    if N >= 10:
        nom = int(N / 10.0)
    else:
        nom = 1
    count = 0
    ngecounts = {}
    for i in range(len(source) + 1):
        for subset in combinations.subsets(source, i):
            count += 1
            shuffle1 = list(subset)
            shuffle2 = [x for x in source if x not in shuffle1]
            # the test statistics
            diffs = teststatistic(gold, set(shuffle1 + doubles), set(shuffle2 + doubles), scoring=getscores2, absolute=absolute)
            # see whether the shuffled systems differ more than the originals
            for k in refdiffs.keys():
                pseudo = diffs[k]
                actual = refdiffs[k]
                if pseudo >= actual:
                    ngecounts[k] = ngecounts.get(k, 0) + 1
                elif k not in ngecounts:
                    ngecounts[k] = 0
            if verbose and not (count % nom):
                loginfo('Calculated combination %d/%d' % (count, N))
    assert set(ngecounts.keys()) == set(refdiffs.keys())
    # Calculate probabilities
    return getprobabilities(ngecounts, N, add=0, verbose=True)
def termsignificance(gold, system1, system2, N=10000, verbose=False, absolute=False):
    '''Calculates randomized term extraction significance'''
    # Get all terms that are unique to one system
    source = []
    doubles = []
    news1 = []
    news2 = []
    for t in system1.union(system2):
        if t in system1 and t not in system2:
            source.append(t)
            news1.append(t)
        elif t not in system1 and t in system2:
            source.append(t)
            news2.append(t)
        else:
            doubles.append(t)
    # the reference test statistics
    refdiffs = teststatistic(gold, system1, system2, scoring=getscores2, absolute=absolute)
    if N >= 10:
        nom = int(N / 10.0)
    else:
        nom = 1
    ngecounts = {}
    for i in range(N):
        shuffle1, shuffle2 = distribute(source)
        # the test statistics
        diffs = teststatistic(gold, shuffle1.union(doubles), shuffle2.union(doubles), scoring=getscores2, absolute=absolute)
        # see whether the shuffled systems differ more than the originals
        for k in refdiffs.keys():
            pseudo = diffs[k]
            actual = refdiffs[k]
            if pseudo >= actual:
                ngecounts[k] = ngecounts.get(k, 0) + 1
            elif k not in ngecounts:
                ngecounts[k] = 0
        if verbose and not ((i + 1) % nom):
            loginfo('Calculated shuffle %d/%d' % (i + 1, N))
    assert set(ngecounts.keys()) == set(refdiffs.keys())
    # Calculate probabilities
    return getprobabilities(ngecounts, N, add=1, verbose=True)
def getdifference(system1, system2, gold=None):
    '''
    Takes lists of labels and returns lists with only those entries for
    which system1 != system2. If the list gold is given, the gold labels
    of those entries are also returned.
    '''
    new_system1 = []
    new_system2 = []
    new_gold = []
    rest1 = []
    rest2 = []
    common_gold = []
    G = True
    if gold is None:
        G = False
        gold = system1[:]
    if not (len(system1) == len(system2) == len(gold)):
        raise ValueError('Input lists should have the same length')
    for g, s1, s2 in zip(gold, system1, system2):
        if s1 != s2:
            new_system1.append(s1)
            new_system2.append(s2)
            if G:
                new_gold.append(g)
        else:
            rest1.append(s1)
            rest2.append(s2)
            common_gold.append(g)
    if not G:
        new_gold = []
    assert rest1 == rest2
    return new_system1, new_system2, new_gold, rest1, common_gold
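Only instances on which the two systems disagree can change the test statistic's difference under shuffling, so restricting the shuffle to them is safe and faster. A hypothetical illustration of the filtering step:

```python
gold = ['A', 'B', 'A', 'C']
system1 = ['A', 'B', 'B', 'C']
system2 = ['A', 'A', 'B', 'C']
# Positions 0, 2 and 3 agree between the systems; only position 1 differs.
differing = [(g, s1, s2) for g, s1, s2 in zip(gold, system1, system2)
             if s1 != s2]
```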
def main(gold, system1, system2, verbose=False, N=10000, exact_threshold=20, training=None, scoring=getscores):
    '''
    exact_threshold: the maximum number of instances for which exact
    randomization is calculated instead of approximate randomization
    '''
    # Check
    if not (len(gold) == len(system1) == len(system2)):
        raise ValueError('There should be an equal number of non-empty lines in each input file.')
    # Shuffle only those instances that have a different class label
    news1, news2, newgold, common, common_gold = getdifference(system1, system2, gold)
    if verbose:
        for i, s in enumerate([system1, system2]):
            scores = scoring(gold, s, training=training)
            lines = ['Scores for system%d:' % (i + 1)]
            for k in sorted(scores.keys()):
                lines.append(' %-23s : %.4f' % (k, scores[k]))
            print('\n'.join(lines))
    # only shuffle the differences: quicker and the same probabilities
    gold = newgold
    system1 = news1
    system2 = news2
    # The number of instances with different predictions
    total_uniq = len(gold)
    if verbose:
        loginfo('Found %d predictions that are different for the 2 systems' % total_uniq)
    # number of permutations
    try:
        np = pow(2, len(gold))
    except OverflowError:
        np = 1000000001
    if np > 1000000000:
        loginfo('Number of permutations: more than 1,000,000,000')
    else:
        loginfo('Number of permutations: %d' % np)
    if np <= N and total_uniq > exact_threshold:
        loginfo('NOTE:')
        loginfo('The number of permutations is lower than the number of shuffles.')
        loginfo('You may want to calculate exact randomization. To do this,')
        loginfo('set option -t higher than %d.' % total_uniq)
    if total_uniq <= exact_threshold:
        if verbose:
            loginfo('This is equal to or less than the threshold of %d predictions: calculating exact randomization' % exact_threshold)
        probs = exactlabelingsignificance(gold, system1, system2, verbose=verbose, training=training, scoring=scoring, common=common, common_gold=common_gold)
    else:
        probs = labelingsignificance(gold, system1, system2, N=N, verbose=verbose, training=training, scoring=scoring, common=common, common_gold=common_gold)
    if verbose:
        loginfo('Done')
    return probs
def main2(gold, system1, system2, verbose=False, N=1048576, absolute=True, exact_threshold=10):
    '''The main for term extraction'''
    # No doubles
    gold = set(gold)
    system1 = set(system1)
    system2 = set(system2)
    if verbose:
        for i, s in enumerate([system1, system2]):
            scores = getscores2(gold, s)
            lines = ['Scores for system%d:' % (i + 1)]
            for k in sorted(scores.keys()):
                lines.append(' %-23s : %.4f' % (k, scores[k]))
            print('\n'.join(lines))
    # the number of terms that occur in only one of the two systems
    union = system1.union(system2)
    intersect = system1.intersection(system2)
    total_uniq = len(union) - len(intersect)
    if verbose:
        loginfo('Found %d predictions that are different for the 2 systems' % total_uniq)
    if total_uniq < exact_threshold:
        if verbose:
            loginfo('This is less than the threshold of %d terms: calculating exact randomization' % exact_threshold)
        probs = exacttermsignificance(gold, system1, system2, verbose=verbose, absolute=absolute)
    else:
        probs = termsignificance(gold, system1, system2, N=N, verbose=verbose, absolute=absolute)
    if verbose:
        loginfo('Done')
    return probs
def main3(data, verbose=False, N=1048576, absolute=True):
    '''The main for stratified shuffling'''
    scoring_func = average
    # The groups
    groups = sorted(data[list(data.keys())[0]].keys())
    assert len(groups) == 2
    if verbose:
        strata = sorted(data.keys())
        stext = 'a'
        if len(strata) == 1:
            stext = 'um'
        loginfo('Found %d strat%s: %s' % (len(data), stext, ', '.join(strata)))
        loginfo('')
        loginfo('Computing %d shuffles' % N)
        loginfo('H0: there is no absolute difference between the means of %s and %s' % tuple(groups))
        loginfo('    Commonly, you reject H0 if the probability drops below')
        loginfo('    a predefined significance level, e.g. 0.05.')
        loginfo('-' * 50)
    systems = {groups[0]: [], groups[1]: []}
    for stratum, d in data.items():
        for g in groups:
            systems[g] += d[g]
    if verbose:
        for g in groups:
            s = systems[g]
            scores = scoring_func(None, s)
            lines = ['Scores for group %s:' % g]
            for k in sorted(scores.keys()):
                lines.append(' %-23s : %.4f' % (k, scores[k]))
            print('\n'.join(lines))
    # Reference
    refdiffs = teststatistic(None, systems[groups[0]], systems[groups[1]], training=None, scoring=scoring_func, absolute=absolute)
    if N >= 10:
        nom = N // 10
    else:
        nom = 1
    # Start shuffling
    ngecounts = {}
    for i in range(N):
        shuffled = {}
        for stratum, d in data.items():
            values = d[groups[0]] + d[groups[1]]
            n1 = len(d[groups[0]])
            n2 = len(d[groups[1]])
            labels = [groups[0]] * n1 + [groups[1]] * n2
            random.shuffle(labels)
            for l, v in zip(labels, values):
                shuffled[l] = shuffled.get(l, []) + [v]
        # the test statistics
        diffs = teststatistic(None, shuffled[groups[0]], shuffled[groups[1]], scoring=scoring_func, absolute=absolute)
        # see whether the shuffled groups differ more than the originals
        for k in refdiffs.keys():
            pseudo = diffs[k]
            actual = refdiffs[k]
            if pseudo >= actual:
                ngecounts[k] = ngecounts.get(k, 0) + 1
            elif k not in ngecounts:
                ngecounts[k] = 0
        if verbose and not ((i + 1) % nom):
            loginfo('Calculated shuffle %d/%d' % (i + 1, N))
    assert set(ngecounts.keys()) == set(refdiffs.keys())
    # Calculate probabilities
    return getprobabilities(ngecounts, N, add=1, verbose=True)
# ========================================================================================================================
# TESTING
# ========================================================================================================================

def Yeh():
    '''Creates 3 synthetic files to reproduce the results from Section 3.3 of:

        Alexander Yeh, More accurate tests for the statistical significance of
        result differences, in: Proceedings of the 18th International
        Conference on Computational Linguistics, Volume 2, pages 947-953, 2000.

    The filenames are yeh.gold, yeh.s1 and yeh.s2.

    Running the following command reproduces the reported results:

        $ python art.py -c yeh.gold -n1048576 -v -r -a yeh.s1 yeh.s2
        Probabilities for accepting H0:
         f1-score               : 0.014643
         precision              : 0.97995
         recall                 : 0.00010204

    Note that the test statistic is system1-system2, so for precision the
    probability from Yeh is 1 - 0.97995 = 0.02005.
    '''
    gold = 'yeh.gold'
    s1 = 'yeh.s1'
    s2 = 'yeh.s2'
    # The gold standard
    with open(gold, 'w') as f:
        for i in range(103):
            f.write('%d\n' % i)
    # System 1: R45.6 P49.5 F47.5
    with open(s1, 'w') as f:
        for i in range(19 + 28):
            f.write('%d\n' % i)             # retrieved by both and by system1
        for i in range(5):
            f.write('B%d\n' % i)            # spurious, retrieved by both
        for i in range(43):
            f.write('one%d\n' % i)          # spurious, retrieved by system1
    # System 2: R24.3 P64.1 F35.2
    with open(s2, 'w') as f:
        for i in range(19 + 6):
            if i < 19:
                f.write('%d\n' % i)         # retrieved by both
            else:
                f.write('%d\n' % (i + 28))  # retrieved by system2
        for i in range(5):
            f.write('B%d\n' % i)            # spurious, retrieved by both
        for i in range(9):
            f.write('two%d\n' % i)          # spurious, retrieved by system2
    print('Written:', gold, s1, s2)
# ==================================================================================================================
if __name__ == '__main__':

    def _usage():
        print('''Approximate Randomization testing (version %s)

This script can be used to assess the significance of differences in recall,
precision, f-score, and accuracy between two machine learner outputs.

The H0 hypothesis tested is:

    There is no difference between SYSTEM1 and SYSTEM2 for a given score.

This hypothesis is tested for: macro-av. recall, macro-av. precision,
macro-av. f-score, micro-av. f-score, and accuracy.

The output is a set of probabilities for accepting H0. If this probability is
lower than a predefined level (e.g. 0.05), H0 is rejected.

USAGE
    ./art.py [-m] [-n int] [-c <gold-standard>] [-s sep] [-t int] [-T training] [-r] [-a] [-h] [-H] [-v] <output_a> <output_b>

OPTIONS
    -n : Number of shuffles (default: 10000)
    -c : Change the expected format of the input files, see FORMAT below
    -s : Feature separator (default: whitespace)
    -t : Define the maximal number of instances the input files may contain
         for exact randomization. The lower this value, the sooner approximate
         randomization is carried out. If set to 0, approximation is always
         carried out. Note that exact randomization for input files with
         only 10 instances can already take a long time. (default: 10)
    -T : Path to the training file used by both systems, see TRAINING below
    -r : Term extraction significance testing instead of labeling significance
         testing, see TERM EXTRACTION below. -c is mandatory; -T is ignored.
    -a : Use the actual difference instead of the absolute difference when
         calculating term extraction significance
    -m : Test for MBT experiments, see MBT below. -c is mandatory.
    -h : Print help
    -H : Print more background information
    -v : Verbose processing

FORMAT
By default, the script expects 2 instance files tagged by different
classifiers.
    - Each instance should be on a new line.
    - All features and class labels should be separated by the feature
      separator. This can be set with the -s option.
    - An instance is a list of features, followed by the gold-standard class
      label, followed by the class label as predicted by the classifier
      (= standard Timbl output).

If option -c is set, an extra input file with the gold-standard class labels
should be provided. The format of all input files should then be:
    - one class label per line (and nothing else)
    - class labels belonging to the same instance should be on the same
      line in all 3 input files.

VALIDITY
If scipy (www.scipy.org) is available and -v is set, the sign-test probability
is also reported when carrying out approximate randomization. This probability
can be compared with the reported probability for "accuracy" to check the
validity of the randomization method. Both probabilities should be similar, or
should at least lead to similar conclusions; otherwise you might consider
increasing the number of shuffles with option -n. Another validity check is
rerunning the randomization test and comparing the results.

The test carried out by the two-sided paired sign test is:

    H0: The number of correct predictions from SYSTEM1 that are incorrectly
        predicted by SYSTEM2 equals the number of correct predictions from
        SYSTEM2 that are incorrectly predicted by SYSTEM1. (Predictions that
        are correct or incorrect for both systems are ignored.)

H0 is rejected if the reported sign-test probability is lower than a
predefined level.

TRAINING
Macro- and micro-averaging is carried out by taking the class counts from the
input files. If not every class from the original training file occurs in the
input files to the same extent, the reported averaged scores may differ from
the scores reported by Timbl.

This averaging difference can be resolved by supplying the training file with
the -T option. The same training file should be used by both systems.

When the -c option is set, the format of the supplied file should be the same
as that of the input files (only class labels); if -c is not set, the supplied
training file should contain instances without predicted class labels, only
the gold-standard labels.