#! /usr/bin/env python
"""
Name:
crosspower.py (branch quickCl)
Purpose:
calculate various theoretical angular power spectra for gravitational lensing
and galaxies
Uses:
pycamb (aka camb)
Modification History:
Written by Z Knight, 2017.07.31
Added dNdz interpolation; ZK, 2017.08.04
Added sum of dNdz curves in dNdz interp; ZK, 2017.08.08
Added gBias from Gian16 data; ZK, 2017.08.15
Fixed error of missing chi factors in winKappa and getCl; ZK, 2017.08.16
Modified functions so that cosmological parameters are **kwargs;
Consolidated some code into matterPower object;
Added nz as parameter to matterPower init;
Modified getCl, winKappa, winGalaxies for bias/amp params; ZK, 2017.08.27
Added plotRedshiftAppx; ZK, 2017.08.28
Added modelDNDZ; ZK, 2017.08.29
Modified winGalaxies and getCl to take modelDNDZ; ZK, 2017.08.30
Added byeBiasFit, getNormalizedDNDZbin; ZK, 2017.09.01
Added getWinKinterp, getNormalizedWinKbin to match DNDZ versions;
Reduced winKappa and winGalaxies parameter lists; ZK, 2017.09.06
Added bin option to winKappa; ZK, 2017.09.07
Added note on biases to winGalaxies; ZK, 2017.09.19
Modified winGalaxies to omit normalization if needed; ZK, 2017.09.27
Added z0 *= 1.0 to modelDNDZ to fix dividing by integer problem;
ZK, 2017.09.28
Added switch to use W^k as dNdz in winGalaxies; ZK, 2017.09.29
Modified getNormalizedDNDZbin and getNormalizedWinKbin to always
use the zs points given to them in normalization, and to call same
normBin routine; ZK, 2017.10.01
Modified plotModelDNDZbins and plotWinKbins to have normalized or
non-normalized plotting ability; ZK, 2017.10.02
Modified normBin to interpret redshift points as bin centers rather than
edges; ZK, 2017.10.04
Added overlapping redshift bin functionality (dndzMode1); ZK, 2017.10.06
Added bin smoothing with Gaussian (dndzMode2); ZK, 2017.10.10
Expanded cosmological parameter set for nuLambdaCDM; ZK, 2017.10.15
Reparameterized from H0 to cosmomc_theta; ZK, 2017.10.16
Added H0, Hs fields to matterPower; ZK, 2017.10.20
Added omega_n to omega_m sum in winKappa; ZK, 2017.10.23
Changed matter power interpolator zmax to zstar; ZK, 2017.10.24
Branched off of master. This version reverts to function getCl doing
rough approximation to integration; ZK, 2017.12.13
Fixed dz/dchi problem in winGalaxies; made matterPower default nz=10000;
renamed matterPower as MatterPower; ZK, 2017.12.14
Added AccuracyBoost and dark energy w to MatterPower class;
added interpOnly and zs parameters to winKappaBin and winGalaxies;
added getChiofZ and getHofZ to MatterPower;
created class Window; modified getCl to use Window object;
modified test functions to use new version of getCl;
ZK, 2017.12.17
Added biasByBin functionality to Window; ZK, 2017.12.18
Pulled myVar1, myVar2 up to MatterPower.__init__;
Added kludge to force binBKs (binAs) to all be 1; ZK, 2018.01.28
Added neutrino_hierarchy output, AccuracyBoost record; ZK, 2018.02.16
Pulled hunits parameter up to makePkinterp and MatterPower init;
Added modelDNDZ3, from Schaan ea 2016, which has 3 params;
Added beesBins to tophat; ZK, 2018.02.22
Added cosmomc_theta field to MatterPower class; ZK, 2018.02.26
Minor fix to tophat; ZK, 2018.03.05
Expanded dark energy parameterization to include w0,wa.
This necessitates the use of a modified version of the equations_ppf.f90
module in CAMB and additions to pycamb's model.CAMBparams.set_dark_energy;
ZK, 2018.03.19
Switched ordering of pars initialization in MatterPower.getPars to first
set_dark_energy, then set_cosmology; ZK, 2018.03.13
Modified getCl to calculate Cl at smaller set of ell values and
interpolate the rest; ZK, 2018.04.10
Added lensLmax and lpa parameters to MatterPower.getPars to control
lensing accuracy and range; ZK, 2018.04.13
"""
#import sys, platform, os
import numpy as np
import matplotlib.pyplot as plt
#import scipy.integrate as sint
from scipy.interpolate import interp1d
import camb # Note: this is now a modified version that includes w,wa
from camb import model, initialpower
from scipy import polyfit,poly1d
from scipy.interpolate import UnivariateSpline
################################################################################
# the matterPower class
class MatterPower:
"""
Purpose:
create and manipulate matter power spectrum objects
Description:
Data:
cosParams: the cosmological parameters for camb's set_cosmology
Default:
cosParams = {
'H0' : None, #67.51,
'cosmomc_theta' : 1.04087e-2,
'ombh2' : 0.02226,
'omch2' : 0.1193,
'omk' : 0,
'tau' : 0.063,
'mnu' : 0.06, #(eV)
'nnu' : 3.046,
'standard_neutrino_neff' : 3.046,
'num_massive_neutrinos' : 1,
'neutrino_hierarchy' : 'normal'}
ns: scalar spectral index
r: tensor-to-scalar ratio at the pivot scale
nz: number of z points to collect z, chi, dz, dchi at between here and last scattering
Default: 10000
kmax: maximum k value to use in Pk
PK:
chistar:
chis:
dchi:
zs:
dzs:
pars:
H0: hubble constant in Mpc^-1 units
Hs: hubble parameter H(z) for all zs points
Methods:
__init__
getPars: gets camb parameters object
makePKinterp: make the Pk interpolator
getPKinterp: return the Pk interpolator and other related info
"""
def __init__(self,nz=10000,As=2.130e-9,ns=0.9653,r=0,kPivot=0.05,w=-1.0,wa=0.0,
nonlinear=True,AccuracyBoost=3,myVar1=model.Transfer_tot,
myVar2=model.Transfer_tot,hunits=False,**cos_kwargs):
"""
Inputs:
nz: the number of z points to use between here and last scattering surface
Important usage is as the number of points to use in approximation of
C_l integrals (endpoints will be dropped for that calculation)
As: "comoving curvature power at k=piveo_scalar"(sic)
ns: "scalar spectral index"
r: "tensor to scalar ratio at pivot"
kPivot: "pivot scale for power spectrum"
w: the w0 part of the dark energy p_de/rho_de in w = w0 + wa*( 1-a )
Default: -1.0
wa: the wa part of the dark energy p_de/rho_de in w = w0 + wa*( 1-a )
Default: 0.0
nonlinear: set to True to use CAMB's non-linear correction from halo model
AccuracyBoost: to pass to set_accuracy to set accuracy
Note that this sets accuracy globally, not just for this object
myVar1,myVar2: the variables to get power spectrum of
Default: model.Transfer_tot for delta_tot
hunits: set to True to use h-units for k, P(k)
Default: False
**cos_kwargs: the cosmological parameters for camb's set_cosmology
"""
# set cosmological parameters
self.cosParams = {
#'H0' : 67.51,
'H0' : None,
'cosmomc_theta' : 1.04087e-2,
'ombh2' : 0.02226,
'omch2' : 0.1193,
'omk' : 0,
'tau' : 0.063,
'mnu' : 0.06, # (eV)
'nnu' : 3.046,
'standard_neutrino_neff' : 3.046,
'num_massive_neutrinos' : 1,
'neutrino_hierarchy' : 'normal'}
self.updateParams(**cos_kwargs)
self.As = As
self.ns = ns
self.r = r
self.kPivot = kPivot
self.nonlinear=nonlinear
self.AccuracyBoost = AccuracyBoost
self.w = w
self.wa = wa
# check this
print 'neutrino_hierarchy = ',self.cosParams['neutrino_hierarchy']
# more parameters
self.nz = nz # 2 endpoint values will later be dropped, leaving nz-2 z points
self.kmax = 10
k_per_logint = None #100 # number of k points per log interval; None uses CAMB's default
# make the PK interpolator (via camb)
self.makePKinterp(newPk=True,nz=nz,kmax=self.kmax,As=As,ns=ns,r=r,w=w,wa=wa,
kPivot=kPivot,k_per_logint=k_per_logint,nonlinear=nonlinear,
AccuracyBoost=AccuracyBoost,myVar1=myVar1,myVar2=myVar2,
hunits=hunits)
def updateParams(self,**cos_kwargs):
"""
modify object's cosParams variable
Inputs:
**cos_kwargs: dictionary of modifications to be passed to set_cosmology
"""
self.cosParams.update(cos_kwargs)
def getPars(self,As=2.130e-9,ns=0.9653,r=0,kPivot=0.05,w=-1.0,wa=0.0,
lmax=3000,lpa=5.0,AccuracyBoost=3,**cos_kwargs):
"""
Purpose:
quickly get camb parameters object
follows example code from http://camb.readthedocs.io/en/latest/CAMBdemo.html
but with slightly different parameters
Notes:
This method uses object's parameters by default, but these can be overriden.
This method does not overwrite object's stored parameters
Inputs:
To pass on to set_params:
As: "comoving curvature power at k=piveo_scalar"(sic)
ns: "scalar spectral index"
r: "tensor to scalar ratio at pivot"
kPivot: "pivot scale for power spectrum"
w: the w0 part of the dark energy p_de/rho_de in w = w0 + wa*( 1-a )
Warning: this is a global setting, not just per object
Default: -1.0
wa: the wa part of the dark energy p_de/rho_de in w = w0 + wa*( 1-a )
Warning: this is a global setting, not just per object
Default: 0.0
lmax: lmax setting for T,E,B to pass to set_for_lmax
Default: 3000
lpa: lens_potential_accuracy setting to pass to set_for_lmax
Default: 5.0
AccuracyBoost: to pass to set_accuracy to set accuracy
Warning: this is a global setting, not just per object
**cos_kwargs: keyword args to pass to set_cosmology
if not included, object defaults will be used
Returns:
the pars object
"""
# get cosmological parameters
cosParams = self.cosParams
cosParams.update(cos_kwargs)
#Set up a new set of parameters for CAMB
pars = camb.CAMBparams()
pars.set_dark_energy(w,wa)
pars.set_cosmology(**cosParams)
pars.InitPower.set_params(As=As,ns=ns,r=r,pivot_scalar=kPivot)
pars.set_for_lmax(lmax, lens_potential_accuracy=lpa)
pars.set_accuracy(AccuracyBoost=AccuracyBoost)
return pars
def makePKinterp(self,newPk=True,nz=10000,kmax=10,As=2.130e-9,ns=0.9653,r=0,
kPivot=0.05,w=-1.0,wa=0.0,myVar1=model.Transfer_tot,
myVar2=model.Transfer_tot,k_per_logint=None,
nonlinear=True,AccuracyBoost=3,hunits=False,**cos_kwargs):
"""
example code from http://camb.readthedocs.io/en/latest/CAMBdemo.html
(modified to have nz,kmax,myVar as inputs)
(and to have dzs as additional output)
Purpose:
For calculating large-scale structure and lensing results yourself, get a power spectrum
interpolation object. In this example we calculate the CMB lensing potential power
spectrum using the Limber approximation, using PK=camb.get_matter_power_interpolator() function.
calling PK(z, k) will then get power spectrum at any k and redshift z in range.
Inputs:
newPk: set to True to calculate new pars, Pk, etc. for object variables
nz: number of steps to use for the radial/redshift integration
(endpoints will be dropped in actual calculation)
kmax: kmax to use
As: "comoving curvature power at k=piveo_scalar"(sic)
ns: "scalar spectral index"
r: "tensor to scalar ratio at pivot"
kPivot: "pivot scale for power spectrum"
w: the w0 part of the dark energy p_de/rho_de in w = w0 + wa*( 1-a )
Default: -1.0
wa: the wa part of the dark energy p_de/rho_de in w = w0 + wa*( 1-a )
Default: 0.0
myVar1,myVar2: the variables to get power spectrum of
Default: model.Transfer_tot for delta_tot
k_per_logint=None: to pass to get_matter_power_interpolator
which passes it to set_matter_power
nonlinear: set to True to use CAMB's non-linear correction from halo model
Default: True
AccuracyBoost: to pass to set_accuracy to set accuracy
Note that this sets accuracy globally, not just for this object
hunits: set to True to use h-units for k, P(k)
Default: False
**cos_kwargs: keyword args to pass to getPars and set_cosmology
if not used, getPars will use object defaults
Outputs:
sets various object variables
"""
if newPk:
self.updateParams(**cos_kwargs)
self.pars = self.getPars(As=As,ns=ns,r=r,kPivot=kPivot,w=w,wa=wa,
AccuracyBoost=AccuracyBoost)
#For Limber result, want integration over \chi (comoving radial distance), from 0 to chi_*.
#so get background results to find chistar, set up array in chi, and calculate corresponding redshifts
results= camb.get_background(self.pars)
self.chistar = results.conformal_time(0)- model.tau_maxvis.value
self.chis = np.linspace(0,self.chistar,nz)
self.zs=results.redshift_at_comoving_radial_distance(self.chis)
self.zstar = self.zs[-1]
#Calculate array of delta_chi, and drop first and last points where things go singular
self.dchis = (self.chis[2:]-self.chis[:-2])/2 #overkill since chis are evenly spaced
self.dzs = ( self.zs[2:]- self.zs[:-2])/2 #not as nice as with chi since zs not evenly spaced
self.chis = self.chis[1:-1]
self.zs = self.zs[1:-1]
print 'zs.size: ',self.zs.size
#Get the matter power spectrum interpolation object (based on RectBivariateSpline).
#return_z_k = True
return_z_k = False
if return_z_k:
self.PK, self.zArray, self.kArray = \
camb.get_matter_power_interpolator(self.pars, nonlinear=nonlinear,
hubble_units=hunits, k_hunit=hunits, kmax=kmax,k_per_logint=k_per_logint,
var1=myVar1,var2=myVar2, zmax=self.zstar, return_z_k=return_z_k)
else:
self.PK = camb.get_matter_power_interpolator(self.pars, nonlinear=nonlinear,
hubble_units=hunits, k_hunit=hunits, kmax=kmax,k_per_logint=k_per_logint,
var1=myVar1,var2=myVar2, zmax=self.zstar)
#Get H(z) values (in Mpc^-1 units)
#print 'calculating H(z) at each z...'
self.Hs = np.empty(nz-2)
self.H0 = results.h_of_z(0)
for zIndex, z in enumerate(self.zs):
self.Hs[zIndex] = results.h_of_z(z)
self.Hstar = results.h_of_z(self.zstar)
self.cosmomc_theta = results.cosmomc_theta()
def getPKinterp(self):
"""
Purpose:
Just returns some values from object
Returns:
the PK(z,k) interpolator
chistar (chi of last scattering surface)
chis (array of chi values) (actually appears to be conformal time)
dchis (delta chi array)
zs (redshift array)
dzs (delta z array)
pars (CAMB parameters)
"""
return self.PK,self.chistar,self.chis,self.dchis,self.zs,self.dzs,self.pars
def getChiofZ(self,kind='quadratic'):
"""
get a chi(z) interpolator
inputs:
kind: kind of interpolation
returns:
function chi(z)
"""
zs = np.hstack(( [0],self.zs, self.zstar))
chis = np.hstack(([0],self.chis,self.chistar))
#print 'Chi(z) zmax: ',zs[-1]
return interp1d(zs,chis,kind=kind)
def getHofZ(self,kind='quadratic'):
"""
get an H(z) interpolator
inputs:
kind: kind of interpolation
returns:
function H(z)
"""
zs = np.hstack(( [0],self.zs,self.zstar))
Hs = np.hstack((self.H0,self.Hs,self.Hstar))
return interp1d(zs,Hs,kind=kind)
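The getChiofZ/getHofZ interpolators simply stack the z=0 and last-scattering endpoint values onto the stored arrays before calling interp1d. A minimal standalone sketch of that pattern, using synthetic stand-in numbers rather than actual CAMB output:

```python
import numpy as np
from scipy.interpolate import interp1d

# Synthetic stand-ins for the object's zs/chis arrays (NOT camb results):
# chi(z) grows monotonically from 0 toward chistar.
zs = np.linspace(0.01, 10.0, 50)
chis = 4000.0 * zs / (1.0 + 0.3 * zs)   # toy monotonic distance-like curve, Mpc
zstar, chistar = 1090.0, 14000.0        # assumed last-scattering values

# Mirror getChiofZ: prepend the origin, append the last-scattering point,
# then build a single interpolator over the stacked arrays.
zGrid = np.hstack(([0], zs, [zstar]))
chiGrid = np.hstack(([0], chis, [chistar]))
chiOfZ = interp1d(zGrid, chiGrid, kind='quadratic')

print(chiOfZ(0.0), chiOfZ(1.0), chiOfZ(5.0))
```

Calling chiOfZ outside [0, zstar] raises a ValueError, which is why the endpoints are stacked on explicitly.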
################################################################################
# window functions
def getDNDZ(binNum=1,BPZ=True):
"""
function to load dN/dz(z) data points from file
Note:
data was digitized from figure 3 of Crocce et al, 2016
data files should be in directory dNdz
Inputs:
binNum: integer in {1,2,3,4,5}.
each corresponds to a redshift bin of width Delta.z=0.2:
0.2-0.4, 0.4-0.6, 0.6-0.8, 0.8-1.0, 1.0-1.2
BPZ: selects which photo-z result to use
True: use BPZ result
False: use TPZ result
Returns:
z, dNdz(z)
"""
myDir = 'dNdz/'
if binNum == 1:
if BPZ:
filename = 'BPZ 1 - two to four.csv'
else:
filename = 'TPZ 1 - two to four.csv'
elif binNum == 2:
if BPZ:
filename = 'BPZ 2 - four to six.csv'
else:
filename = 'TPZ 2 - four to six.csv'
elif binNum == 3:
if BPZ:
filename = 'BPZ 3 - six to eight.csv'
else:
filename = 'TPZ 3 - six to eight.csv'
elif binNum == 4:
if BPZ:
filename = 'BPZ 4 - eight to ten.csv'
else:
filename = 'TPZ 4 - eight to ten.csv'
elif binNum == 5:
if BPZ:
filename = 'BPZ 5 - ten to twelve.csv'
else:
filename = 'TPZ 5 - ten to twelve.csv'
else:
print 'getDNDZ: binNum must be in {1,2,3,4,5}; got ',binNum
return 0,0
myFileName = myDir+filename
#print myFileName
z, dNdz = np.loadtxt(myFileName,unpack=True,delimiter=', ')
return z, dNdz
def extendZrange(zmin,zmax,nBins,binSmooth):
"""
Function to extend range of zs, nBins
Inputs:
zmin,zmax:
nBins:
binSmooth:
Returns:
extraZ, extraBins
"""
if binSmooth !=0:
extraZ = 4*binSmooth
binSize = (zmax-zmin)/(nBins*1.0)
extraBins = np.int_(np.ceil(extraZ/binSize))
extraZ = extraBins*binSize
else:
extraBins = 0
extraZ = 0
return extraZ, extraBins
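The padding arithmetic above (extend the bin range by ~4 smoothing widths, rounded up to a whole number of bins) can be checked standalone; this sketch duplicates the logic for illustration:

```python
import numpy as np

def extend_z_range(zmin, zmax, nBins, binSmooth):
    # Same logic as extendZrange: pad by 4*binSmooth, rounded up
    # to an integer number of bin widths.
    if binSmooth != 0:
        extraZ = 4 * binSmooth
        binSize = (zmax - zmin) / float(nBins)
        extraBins = int(np.ceil(extraZ / binSize))
        extraZ = extraBins * binSize
    else:
        extraBins = 0
        extraZ = 0
    return extraZ, extraBins

extraZ, extraBins = extend_z_range(0.0, 4.0, 10, 0.1)  # binSize 0.4 -> 1 extra bin
noZ, noBins = extend_z_range(0.0, 4.0, 10, 0.0)        # no smoothing -> no padding
print(extraZ, extraBins, noZ, noBins)
```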
def getDNDZinterp(binNum=1,BPZ=True,zmin=0.0,zmax=4.0,nZvals=100,dndzMode=2,
z0=1.5,nBins=10,binSmooth=0):
"""
Purpose:
get interpolator for dNdz(z) data points
Inputs:
binNum: integer in {1,2,3,4,5}. (in dndzMode 1)
each corresponds to a redshift bin of width Delta.z=0.2:
0.2-0.4, 0.4-0.6, 0.6-0.8, 0.8-1.0, 1.0-1.2;
except for binNum=0, which indicates a sum of all other bins
In dndzMode 2: any integer in {1,2,...,nBins}
BPZ: selects which photo-z result to use in dndzMode 1
True: use BPZ result
False: use TPZ result
zmin,zmax: the lowest,highest redshift points in the interpolation
if dndzMode == 1: zmin will be 0, zmax will be 1.5, to match dndz files,
and these points will be added to zs with dndz=0
nZvals: number of z values to use for interpolating sum of all curves
when binNum=0 (for sum of all) is selected (dndzMode 1)
and number of z values per bin when evaluating model DNDZ (dndzMode 2)
dndzMode: indicate which method to use to select dNdz curves
1: use observational DES-SV data from Crocce et al
2: use LSST model distribution, divided into rectangular bins (Default)
Parameters used only in dndzMode = 2:
z0: controls width of distribution
zmax: maximum redshift in range of bins
nBins: number of bins to divide range into
binSmooth: controls smoothing of tophat-stamped bins (unless binNum == 0)
Default: 0 (no smoothing)
Returns:
interpolation function dNdz(z)
should cover slightly larger region than bin of interest for full interpolation
"""
if dndzMode == 1:
zmin = 0.0
zmax = 1.5 # match dndz files
zs = np.linspace(zmin,zmax,nZvals)
if binNum==0:
myDNDZvals = np.zeros(nZvals)
for bN in range(1,6): #can not include 0 in this range
myFunc = getDNDZinterp(binNum=bN,BPZ=BPZ,zmin=zmin,zmax=zmax,dndzMode=1)
myDNDZvals += myFunc(zs)
myDNDZ = myDNDZvals
else:
zs,myDNDZ = getDNDZ(binNum=binNum,BPZ=BPZ)
zs = np.concatenate([[zmin],zs,[zmax]])
myDNDZ = np.concatenate([[0.],myDNDZ,[0.] ])
elif dndzMode == 2:
# do nZvals*nBins+1 to get bin ends in set
zs = np.linspace(zmin,zmax,nZvals*nBins+1)
#zs = np.linspace(zmin,zmax,nZvals)
#myDNDZ = modelDNDZ(zs,z0)
myDNDZ = modelDNDZ3(zs,z0)
myDNDZ = tophat(myDNDZ,zs,zmin,zmax,nBins,binNum)
#binEdges =np.linspace(zmin,zmax,nBins+1)
#print 'edges: ',binEdges[binNum-1],binEdges[binNum]
#print 'dndz nozero at z = : ',zs[np.where(myDNDZ != 0)]
else:
print 'getDNDZinterp: dndzMode must be 1 or 2.'
return 0
# do smoothing
if binSmooth != 0 and dndzMode == 2:
myDNDZ = gSmooth(zs,myDNDZ,binSmooth)
return interp1d(zs,myDNDZ,assume_sorted=True,kind='slinear')#'quadratic')
def getDNDZratio(binNum=1,BPZ=True,zmin=0.0,zmax=1.5,nZvals=100):
"""
Purpose:
get interpolator for ratio of (dNdz)_i/(dNdz)_tot data points
Inputs:
binNum: integer in {1,2,3,4,5}.
each corresponds to a redshift bin of width Delta.z=0.2:
0.2-0.4, 0.4-0.6, 0.6-0.8, 0.8-1.0, 1.0-1.2;
BPZ: selects which photo-z result to use
True: use BPZ result
False: use TPZ result
zmin,zmax: the lowest,highest redshift points in the interpolation
nZvals: number of z values to use for interpolating sum of all curves
when binNum=0 (for sum of all) is selected
Returns:
interpolation function dNdz(z)
"""
numeratorInterp = getDNDZinterp(binNum=binNum,BPZ=BPZ,zmin=zmin,zmax=zmax,
nZvals=nZvals,dndzMode=1)
denominatorInterp = getDNDZinterp(binNum=0,BPZ=BPZ,zmin=zmin,zmax=zmax,
nZvals=nZvals,dndzMode=1)
myZvals = np.linspace(zmin,zmax,nZvals)
numVals = numeratorInterp(myZvals)
denVals = denominatorInterp(myZvals)
# what if denom goes to zero? does it ever? prob. only at endpoints zmin, zmax
#print 'denVals == 0 at: ',np.where(denVals == 0)
# kludge endpoints
denVals[0] = denVals[1]
denVals[-1] = denVals[-2]
ratioVals = numVals/denVals
return interp1d(myZvals,ratioVals,assume_sorted=True,kind='slinear')#'quadratic')
def modelDNDZ(z,z0):
"""
Purpose:
Implement a 1 parameter analytic function that can be used to
approximate galaxy distributions of future galaxy surveys
Inputs:
z: the redshift(s) at which dN/dz is evaluated
z0: the control of the width and extent of the distribution
note: LSST science book uses z0 = 0.3
Returns:
magnitude of dN/dz at redshift z
"""
# cast z0 as float
z0 *= 1.0
zRatio = z/z0
return 1/(2*z0) * (zRatio)**2 * np.exp(-1*zRatio)
def modelDNDZ3(z,z0=0.5,alpha=1.27,beta=1.02):
"""
Purpose:
Implement a 3-parameter analytic function that can be used to
approximate galaxy distributions of future galaxy surveys
Notes:
This is the form used by Schaan et al. 2016
Params for LSST: alpha=1.27,beta=1.02,z0=0.5
Params for Euclid: alpha=1.3, beta=1.5, z0=0.65
Params for WFIRST: alpha=1.27,beta=1.02,z0=0.6
Inputs:
z: the redshift(s) at which dN/dz is evaluated
z0: the control of the width and extent of the distribution
alpha,beta: exponents in formula controlling shape of function
Returns:
magnitude of dN/dz at redshift z
"""
# cast z0 as float
z0 *= 1.0
zRatio = z/z0
return z**alpha * np.exp(-1* zRatio**beta)
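The two model forms can be compared directly; for instance the one-parameter form (z/z0)^2 exp(-z/z0) has its analytic maximum at z = 2*z0. A standalone sketch (not part of the module):

```python
import numpy as np

def model_dndz(z, z0):
    # one-parameter form above: (1/(2*z0)) * (z/z0)^2 * exp(-z/z0)
    return 1.0 / (2.0 * z0) * (z / z0) ** 2 * np.exp(-z / z0)

def model_dndz3(z, z0=0.5, alpha=1.27, beta=1.02):
    # three-parameter Schaan et al. 2016 form: z^alpha * exp(-(z/z0)^beta)
    return z ** alpha * np.exp(-((z / z0) ** beta))

zs = np.linspace(0.0, 6.0, 601)
zPeak = zs[np.argmax(model_dndz(zs, 0.3))]
print(zPeak)  # analytic peak of z^2 exp(-z/z0) is at z = 2*z0 = 0.6
```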
#def tophat(FofZ,zs,zmin,zmax,nBins,binNum,includeEdges=False,beesBins=False):
def tophat(FofZ,zs,zmin,zmax,nBins,binNum,includeEdges=False,beesBins=True):
"""
Purpose:
multiply function by tophat to get a slice of the function
Inputs:
FofZ: array of a function evaluated at a set of redshifts
zs: redshift(s) at which FofZ was evaluated
zmin,zmax: minimum,maximum z to use when slicing FofZ into sections
nBins: number of evenly spaced sections to divide (0,zmax) into
binNum: which bin to return. bin starting at z=zmin is number 1,
bin ending at z=zmax is bin number nBins
Must have 0 < binNum <= nBins,
includeEdges: set to True to add one more myPk.zs point outside of bin range
on each end of bin in set of nonzero values returned;
useful for interpolation over entire bin if bin edges are not included in zs
Default: False
beesBins: a hack to make non-equally sized bins, to match Byeonghee's bins.
If true, bin edges will be 0,0.5,1,2,3,4,7.
nBins must be 6 or an error will be raised and 0 will be returned.
Default: True
Returns:
Tophat slice of FofZ function
"""
myFofZ = FofZ.copy()
if beesBins:
if nBins == 6:
binEdges = [0.0,0.5,1.0,2.0,3.0,4.0,7.0]
else:
print '!!!!! wrong nBins for beesBins detected !!!!!'
return 0
else:
binEdges = np.linspace(zmin,zmax,nBins+1)
if binNum != 0:
#myDNDZ[np.where(z< binEdges[binNum-1])] = 0
#myDNDZ[np.where(z>=binEdges[binNum ])] = 0
indicesBelowBin = np.where(zs< binEdges[binNum-1])
indicesAboveBin = np.where(zs> binEdges[binNum ])
#print '\n before attempting overlap:'
#print 'bin edges: ',binEdges
#print 'below bin: ',zs[indicesBelowBin]
#print 'above bin: ',zs[indicesAboveBin]
if includeEdges: # this is here to allow interp over entire bin
indicesBelowBin = indicesBelowBin[0][:-1]
indicesAboveBin = indicesAboveBin[0][1:]
myFofZ[indicesBelowBin] = 0
myFofZ[indicesAboveBin] = 0
#print '\n after attempting overlap:'
#print 'bin edges: ',binEdges
#print 'below bin: ',zs[indicesBelowBin]
#print 'above bin: ',zs[indicesAboveBin]
return myFofZ
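For the default equal-width case, the slicing above reduces to zeroing everything below the bin's lower edge and above its upper edge, with the edges themselves kept. A minimal sketch of that core step:

```python
import numpy as np

def tophat_slice(FofZ, zs, zmin, zmax, nBins, binNum):
    # Equal-width version of the tophat logic: zero F outside bin binNum.
    out = FofZ.copy()
    binEdges = np.linspace(zmin, zmax, nBins + 1)
    out[np.where(zs < binEdges[binNum - 1])] = 0
    out[np.where(zs > binEdges[binNum])] = 0
    return out

zs = np.linspace(0.0, 4.0, 9)                  # 0, 0.5, ..., 4
F = np.ones_like(zs)
sliced = tophat_slice(F, zs, 0.0, 4.0, 4, 2)   # bin 2 covers z in [1, 2]
print(sliced)  # nonzero only at z = 1.0, 1.5, 2.0
```

Note that both bin edges survive the slice (strict inequalities), which is what makes adjacent bins share their boundary point.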
def normBin(FofZ,binZmin,binZmax,zs,normPoints,verbose=False):
"""
Purpose:
find normalization factor for a function with a redshift bin
Note:
Left bin edges are used for bin height when normalizing
Inputs:
FofZ: a function of redshift, z, to be normalized
Note: this can only be a function of one variable: z
binZmin, binZmax: redshift min, max for normalization range
zs: the points that redshift should eventually be evaluated at
normPoints: number of points between each point in zs to be used in calculation
verbose=False: set to true to have more output
Returns:
normalization factor
"""
# select redshift points to work with
myZs = zs[np.where(np.logical_and( zs>binZmin, zs<binZmax ))]
myZs = np.append([binZmin],myZs)
myZs = np.append(myZs,[binZmax])
if verbose: print '\nmyZs: ',myZs
# add in normPoints number of points in between each point in myZs
if normPoints == 0:
manyZs = myZs
else:
stepSize = 1./(normPoints+1) # +1 for endpoint
xp = np.arange(myZs.size) # x coords for interp
if verbose: print myZs.size,stepSize,myZs.size-stepSize
targets = np.arange(0,myZs.size-1+stepSize,stepSize) # y coords for interp
if verbose: print 'targets: ',targets
manyZs = np.interp(targets,xp,myZs)
if verbose: print 'manyZs: ',manyZs
# get interval widths: spacing between
deltaZs = manyZs-np.roll(manyZs,1)
# ditch endpoint residuals
if binZmin in zs:
deltaZs = deltaZs[1:]
else:
deltaZs = deltaZs[2:]
if binZmax not in zs:
deltaZs = deltaZs[:-1]
if verbose: print 'deltaZs: ',deltaZs
# evaluate FofZ
myF = FofZ(manyZs)
# interpret redshifts as left bin edges; ditch final F(z)
myF = myF[:-1]
if binZmin not in zs:
myF = myF[1:]
if binZmax not in zs:
myF = myF[:-1]
# interpret redshifts as bin centers; ditch first and last z
#myF = myF[1:-1]
if verbose: print 'myF: ',myF
# area under curve
area = np.dot(myF,deltaZs)
return 1./area
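The essential behavior of normBin is that multiplying F by the returned factor makes its left-edge Riemann sum over the bin equal one. A simplified standalone sketch of that contract (without the endpoint bookkeeping above):

```python
import numpy as np

def norm_factor(FofZ, zmin, zmax, nSteps=1000):
    # Simplified normBin: left-bin-edge Riemann sum over [zmin, zmax],
    # returning 1/area so that area(normFac * F) == 1.
    zGrid = np.linspace(zmin, zmax, nSteps + 1)
    dz = np.diff(zGrid)
    heights = FofZ(zGrid[:-1])   # left bin edges set the bin heights
    return 1.0 / np.dot(heights, dz)

f = lambda z: z ** 2 * np.exp(-z / 0.3)
fac = norm_factor(f, 0.0, 4.0)

# verify: the normalized curve has unit area on the same grid
zGrid = np.linspace(0.0, 4.0, 1001)
area = np.dot(fac * f(zGrid[:-1]), np.diff(zGrid))
print(area)
```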
def getNormalizedDNDZbin(binNum,zs,z0,zmax,nBins,BPZ=True,dndzMode=2,
zmin=0.0,normPoints=1000,binSmooth=0,verbose=False):
"""
Purpose:
return normalized dndz array
Note:
Left bin edges are used for bin height when normalizing
Inputs:
binNum: index indicating which redshift distribution to use
{1,2,...,nBins}
zs: the redshifts to calculate DNDZ at
redshift values outside range (zmin,zmax) will return zero
dndzMode: indicate which method to use to select dNdz curves
1: use observational DES-SV data from Crocce et al
2: use LSST model distribution, divided into rectangular bins (Default)
Parameters used only in dndzMode = 2:
z0: controls width of distribution
zmax: maximum redshift in range of bins
nBins: number of bins to divide range into
binSmooth: controls smoothing of tophat-stamped bins (unless binNum == 0)
Default: 0 (no smoothing)
BPZ=True: controls which dndz curves to use in dndzMode = 1
zmin=0.0: minimum redshift in range of bins
normPoints=1000: number of points per zs interval to use when normalizing
verbose: control amount of output from normBin
Returns:
Normalized array of dNdz values within the bin, corresponding to redshifts zs
"""
# check binNum
if binNum < 0 or binNum > nBins:
print 'getNormalizedDNDZbin: binNum must be in {0,1,...,nBins}; got ',binNum
return 0
# get dNdz
if dndzMode == 1:
zmax = 1.5 # hard coded to match domain in dndz files
rawDNDZ = getDNDZinterp(binNum=binNum,BPZ=BPZ,zmin=zmin,zmax=zmax,dndzMode=1)
binZmin = zmin
binZmax = zmax
elif dndzMode == 2:
# extend Z range for smoothing
extraZ,extraBins = extendZrange(zmin,zmax,nBins,binSmooth)
zmax += extraZ
nBins += extraBins
rawDNDZ = getDNDZinterp(binNum=binNum,BPZ=BPZ,zmin=zmin,zmax=zmax,dndzMode=2,
binSmooth=binSmooth,z0=z0,nBins=nBins)
# divide redshift range into bins and select desired bin
binEdges = np.linspace(zmin,zmax,nBins+1)
binZmin = binEdges[binNum-1] # first bin starting at zmin has binNum=1
binZmax = binEdges[binNum]
if binNum == 0:
binZmin = zmin
binZmax = zmax
if binSmooth != 0:
# adjust binZmin, binZmax to appropriate cutoff for smoothed bin
numSigmas = 4 # should match what is in gSmooth function
binZmin -= numSigmas*binSmooth
binZmax += numSigmas*binSmooth
if binZmin < 0:
binZmin = 0
else: # really, it should be dndzMode = 1 or 2
print 'getNormalizedDNDZbin: dndzMode must be 1 or 2.'
return 0
# select bin indices
binIndices = np.where(np.logical_and( zs>=binZmin, zs<=binZmax ))
# get normalization factor
normFac = normBin(rawDNDZ,binZmin,binZmax,zs[binIndices],normPoints,
verbose=verbose)
# get non-normalized DNDZ
binDNDZ = rawDNDZ(zs[binIndices])
myDNDZ = np.zeros(zs.size)
myDNDZ[binIndices] = binDNDZ
if verbose:
myZs = zs[np.where(np.logical_and( zs>=binZmin, zs<binZmax ))]
print 'zs in getNormalizedDNDZbin: ',myZs
# normalize
normDNDZ = normFac*myDNDZ
return normDNDZ
def gBias():
"""
Purpose:
function that returns data points with errors
Data:
galaxy biases from Giannantonio et al 2016, table 2, gal-gal, real space
Returns:
three triples: redshifts,biases,bias errors
"""
zs = (0.3, 0.5, 0.7, 0.9, 1.1) #redshifts (just bin centers; could be improved)
bs = (1.03,1.28,1.32,1.57,1.95) #galaxy biases
es = (0.06,0.04,0.03,0.03,0.04) #bias errors
return zs,bs,es
def getBiasFit(deg=1):
"""
Purpose:
polynomial fit to bias data
Inputs:
deg: degree of polynomial to fit to bias data
This really should be 1.
Returns:
function to evaluate bias b(z)
"""
zs,bs,es = gBias()
coefs = polyfit(zs,bs,deg=deg,w=1/np.array(es))
return poly1d(coefs)
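The same weighted linear fit can be reproduced with numpy directly (np.polyfit here in place of the scipy import used above, which numpy also provides), using the gBias data points:

```python
import numpy as np

# Giannantonio et al. 2016 bias points, as returned by gBias()
zs = np.array([0.3, 0.5, 0.7, 0.9, 1.1])
bs = np.array([1.03, 1.28, 1.32, 1.57, 1.95])
es = np.array([0.06, 0.04, 0.03, 0.03, 0.04])

# degree-1 fit, weighting each point by 1/sigma
coefs = np.polyfit(zs, bs, deg=1, w=1.0 / es)
biasOfZ = np.poly1d(coefs)
print(biasOfZ(0.3), biasOfZ(1.1))  # bias grows with redshift
```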
def byeBiasFit():
"""
Purpose:
polynomial fit to bias from simulations
Note:
Byeonghee Yu appears to have taken this formula from digitizing a plot
in Weinberg ea 2004 (fig. 9?)
Returns:
a function that can evaluate bias at redshift z, or redshifts z if an array is passed
"""
return lambda z: 1+0.84*z
def gaussian(x,mu,sig):
"""
Purpose:
evaluate Gaussian function at input points
Inputs:
x: numpy array of points to evaluate gaussian at
mu: the mean
sig: the standard deviation
Returns:
the value of the Gaussian at each point in x
"""
return 1./(np.sqrt(2.*np.pi)*sig) * np.exp(-np.power((x-mu)/sig, 2)/2.)
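The prefactor makes each Gaussian integrate to unity, which is what lets gSmooth (below) conserve the area under a bin; a quick numerical check:

```python
import numpy as np

def gaussian(x, mu, sig):
    # same unit-area normal density as above
    return 1.0 / (np.sqrt(2.0 * np.pi) * sig) * np.exp(-((x - mu) / sig) ** 2 / 2.0)

x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
area = np.sum(gaussian(x, 0.0, 1.0)) * dx   # Riemann sum over +/- 8 sigma
print(area)
```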
def gSmooth(zs,FofZ,binSmooth,numSigmas=3):
"""
Purpose:
smooth a tophat-stamped bin by convolution with a Gaussian
Inputs:
zs: the redshift points
FofZ: a function evaluated at these points
binSmooth: the standard deviation of the smoothing Gaussian
numSigmas: how many std devs from edge of old bin new bin goes
Default: 3
Returns:
A smoothed FofZ array
"""
# extend array below zero to reflect power back into positive z
reflectIndices = np.where(zs < binSmooth*(numSigmas+1))[0]
reflectZs = np.flip(zs[reflectIndices],0)*-1
reflectFs = np.zeros(np.size(reflectIndices))
szzs = np.hstack((reflectZs,zs))
ZfoFFofZ = np.hstack((reflectFs,FofZ))
#szzs = zs
#ZfoFFofZ = FofZ
# find deltaZs
# for gaussian function, use redshift points as pixel centers
deltaZs = (np.roll(szzs,-1)-np.roll(szzs,1))/2
# find points within bin
# this is where the tophatness of the bin is used
binIndices = np.where(ZfoFFofZ != 0)[0]
# new array to put results into
smoothed = np.zeros(ZfoFFofZ.size)
for binI in binIndices:
lowZ = szzs[binI]-numSigmas*binSmooth
midZ = szzs[binI]
highZ = szzs[binI]+numSigmas*binSmooth
myIndices = np.where(np.logical_and(szzs>=lowZ,szzs<=highZ))[0]
myZs = szzs[myIndices]
myGs = gaussian(myZs,midZ,binSmooth)
smoothed[myIndices] += myGs*ZfoFFofZ[binI]*deltaZs[binI]
# do the reflection
refl = np.flip(smoothed[reflectIndices],0)
smoothed = smoothed[reflectIndices.size:]
smoothed[reflectIndices] += refl
return smoothed
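Because each sample is replaced by a Gaussian stamp weighted by F(z)*dz, the smoothing approximately conserves the area under the bin. A simplified standalone check (no negative-z reflection, with the bin placed well away from z=0):

```python
import numpy as np

def gaussian(x, mu, sig):
    return 1.0 / (np.sqrt(2.0 * np.pi) * sig) * np.exp(-((x - mu) / sig) ** 2 / 2.0)

def g_smooth(zs, FofZ, binSmooth, numSigmas=3):
    # simplified gSmooth: stamp a Gaussian of area F(z)*dz at each
    # nonzero sample, truncated at numSigmas standard deviations
    deltaZs = (np.roll(zs, -1) - np.roll(zs, 1)) / 2.0
    smoothed = np.zeros(zs.size)
    for i in np.where(FofZ != 0)[0]:
        sel = np.where(np.abs(zs - zs[i]) <= numSigmas * binSmooth)[0]
        smoothed[sel] += gaussian(zs[sel], zs[i], binSmooth) * FofZ[i] * deltaZs[i]
    return smoothed

zs = np.linspace(0.0, 4.0, 401)
F = np.where(np.logical_and(zs >= 1.5, zs <= 2.5), 1.0, 0.0)  # tophat bin
sm = g_smooth(zs, F, 0.05)
areaIn = np.sum(F) * 0.01
areaOut = np.sum(sm) * 0.01
print(areaIn, areaOut)  # areas agree to ~the 3-sigma truncation error
```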
def winGalaxies(myPk,biases=None,BPZ=True,dndzMode=2,
binNum=0,zmin=0.0,zmax=4.0,nBins=10,z0=0.3,
doNorm=True,useWk=False,binSmooth=0,
interpOnly=False,zs=None):
"""
window function for galaxy distribution
Inputs:
myPk: MatterPower object
biases: array of galaxy biases * matter biases (b*A)
corresponding to zs values
default: all equal to 1
BPZ: flag to indicate BPZ or TPZ when dndzMode=1; True for BPZ
binNum: index indicating which redshift distribution to use
{1,2,...,nBins}
if binNum == 0, sum of all bins will be used,
binSmooth will be set to 0, and doNorm to False
Default: 0
dndzMode: indicate which method to use to select dNdz curves
1: use observational DES-SV data from Crocce et al
2: use LSST model distribution, divided into bins (Default)
doNorm: set to True to normalize dN/dz
Default: True
Parameters only used in dndzMode = 2:
nBins: number of bins to create
Default: 10
z0: controls width of full dNdz distribution
Default: 0.3 (LSST-like)
zmin,zmax: the min,max redshift to use
Defaults: 0,4
useWk: set to True to use W^kappa as dN/dz
Default: False
binSmooth: controls bin smoothing
Default: 0 (no smoothing)
interpOnly: set to True to return an interpolation function;
otherwise, an array evaluated at zs will be returned.
Default: False
zs: optional array of z values to evaluate the window function at.
If not specified or is None, myPk.zs will be used.
Returns:
array of W^galaxies values evaluated at myPk.getChiofZ values
"""
# get redshift array, etc.
#PK,chistar,pkChis,dchis,pkZs,dzs,pars = myPk.getPKinterp()
if zs is None:
zs = myPk.zs