Macroeconomic cycles and asset class returns #
This notebook offers the necessary code to replicate the research findings discussed in Macrosynergy’s post “Macroeconomic cycles and asset class returns”. Its primary objective is to inspire readers to explore and conduct additional investigations, while also providing a foundation for testing their own ideas.
Get packages and JPMaQS data #
This notebook primarily relies on the standard packages available in the Python data science stack. However, there is an additional package, `macrosynergy`, that is required for two purposes:

- Downloading JPMaQS data: the `macrosynergy` package facilitates the retrieval of JPMaQS data, which is used in the notebook.
- Analyzing quantamental data and value propositions: the `macrosynergy` package provides functionality for performing quick analyses of quantamental data and exploring value propositions.

For detailed information and a comprehensive understanding of the `macrosynergy` package and its functionalities, please refer to the “Introduction to Macrosynergy package” on the Macrosynergy Academy or visit the corresponding page on Kaggle.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import macrosynergy.management as msm
import macrosynergy.panel as msp
import macrosynergy.signal as mss
import macrosynergy.pnl as msn
from macrosynergy.download import JPMaQSDownload
from datetime import timedelta, date, datetime
from itertools import combinations
import warnings
import os
warnings.simplefilter("ignore")
The JPMaQS indicators we consider are downloaded using the J.P. Morgan DataQuery API interface within the `macrosynergy` package. This is done by specifying ticker strings, formed by appending an indicator category code to a cross-section code. These constitute the main part of a full quantamental indicator expression of the form `DB(JPMAQS,<cross_section>_<category>,<info>)`, where `<info>` denotes the metric requested for the given cross-section and category:

- `value` giving the latest available values for the indicator,
- `eop_lag` referring to days elapsed since the end of the observation period,
- `mop_lag` referring to the number of days elapsed since the mean observation period, and
- `grade` denoting a grade of the observation, giving a metric of real-time information quality.

After instantiating the `JPMaQSDownload` class within the `macrosynergy.download` module, one can use the `download(tickers, start_date, metrics)` method to easily download the necessary data, where `tickers` is an array of ticker strings, `start_date` is the first collection date to be considered, and `metrics` is an array comprising the time series information to be downloaded. For more information see here.
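To make the ticker and expression mechanics concrete, the following sketch shows how ticker strings map to full DataQuery expressions. The helper function `make_expressions` and the two sample tickers are illustrative only, not part of the `macrosynergy` API:

```python
# Illustrative helper (not part of the macrosynergy package): wrap each
# <cross_section>_<category> ticker into full DataQuery expressions,
# one per requested metric.
def make_expressions(tickers, metrics=("value",)):
    return [f"DB(JPMAQS,{t},{m})" for t in tickers for m in metrics]

# two sample tickers, requesting the value and end-of-period lag metrics
exprs = make_expressions(
    ["USD_EQXR_NSA", "EUR_CPIH_SA_P1M1ML12"], metrics=["value", "eop_lag"]
)
print(exprs[0])  # DB(JPMAQS,USD_EQXR_NSA,value)
```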
To ensure reproducibility, only samples between January 2000 (inclusive) and May 2023 (exclusive) are considered.
# General cross-sections lists
cids_g3 = ["EUR", "JPY", "USD"] # DM large currency areas
cids_dmsc = ["AUD", "CAD", "CHF", "GBP", "NOK", "NZD", "SEK"] # DM small currency areas
cids_latm = ["BRL", "COP", "CLP", "MXN", "PEN"] # Latam
cids_emea = ["CZK", "HUF", "ILS", "PLN", "RON", "RUB", "TRY", "ZAR"] # EMEA
cids_emas = ["IDR", "INR", "KRW", "MYR", "PHP", "SGD", "THB", "TWD"] # EM Asia ex China
cids_dm = cids_g3 + cids_dmsc
cids_em = cids_latm + cids_emea + cids_emas
cids = cids_dm + cids_em
cids_nomp = ["COP", "IDR", "INR"] # countries that have no employment growth data
cids_mp = list(set(cids) - set(cids_nomp))
# Equity cross-sections lists
cids_dmeq = ["EUR", "JPY", "USD"] + ["AUD", "CAD", "CHF", "GBP", "SEK"]
cids_emeq = ["BRL", "INR", "KRW", "MXN", "MYR", "SGD", "THB", "TRY", "TWD", "ZAR"]
cids_eq = cids_dmeq + cids_emeq
# FX cross-sections lists
cids_nofx = ["EUR", "USD", "SGD"]
cids_fx = list(set(cids) - set(cids_nofx))
cids_dmfx = set(cids_dm).intersection(cids_fx)
cids_emfx = set(cids_em).intersection(cids_fx)
cids_eur = ["CHF", "CZK", "HUF", "NOK", "PLN", "RON", "SEK"] # trading against EUR
cids_eud = ["GBP", "RUB", "TRY"] # trading against EUR and USD
cids_usd = list(set(cids_fx) - set(cids_eur + cids_eud)) # trading against USD
# IRS cross-section lists
cids_dmsc_du = ["AUD", "CAD", "CHF", "GBP", "NOK", "NZD", "SEK"]
cids_latm_du = ["CLP", "COP", "MXN"] # Latam
cids_emea_du = [
"CZK",
"HUF",
"ILS",
"PLN",
"RON",
"RUB",
"TRY",
"ZAR",
] # EMEA
cids_emas_du = ["CNY", "HKD", "IDR", "INR", "KRW", "MYR", "SGD", "THB", "TWD"]
cids_dmdu = cids_g3 + cids_dmsc_du
cids_emdu = cids_latm_du + cids_emea_du + cids_emas_du
cids_du = cids_dmdu + cids_emdu
JPMaQS indicators are conveniently grouped into six main categories: Economic Trends, Macroeconomic balance sheets, Financial conditions, Shocks and risk measures, Stylized trading factors, and Generic returns. Each indicator has a separate page with notes, description, availability, statistical measures, and timelines for main currencies. The description of each JPMaQS category is available either under the Macro Quantamental Academy or on JPMorgan Markets (password protected). In particular, the indicators used in this notebook can be found under Labor market dynamics, Demographic trends, Consumer price inflation trends, Intuitive growth estimates, Long-term GDP growth, Private credit expansion, Equity index future returns, FX forward returns, and Duration returns.
# Category tickers
main = [
"EMPL_NSA_P1M1ML12_3MMA",
"EMPL_NSA_P1Q1QL4",
"WFORCE_NSA_P1Y1YL1_5YMM",
"WFORCE_NSA_P1Q1QL4_20QMM",
"UNEMPLRATE_NSA_3MMA_D1M1ML12",
"UNEMPLRATE_NSA_D1Q1QL4",
"UNEMPLRATE_SA_D1Q1QL4", # potentially NZD only
"UNEMPLRATE_SA_D3M3ML3",
"UNEMPLRATE_SA_D1Q1QL1",
"UNEMPLRATE_SA_3MMA",
"UNEMPLRATE_SA_3MMAv10YMM",
"CPIH_SA_P1M1ML12",
"CPIH_SJA_P6M6ML6AR",
"CPIC_SA_P1M1ML12",
"CPIC_SJA_P6M6ML6AR",
"INFTEFF_NSA",
"INTRGDPv5Y_NSA_P1M1ML12_3MMA",
"RGDP_SA_P1Q1QL4_20QMM",
"PCREDITBN_SJA_P1M1ML12",
]
xtra = ["GB10YXR_NSA"]
rets = [
"EQXR_NSA",
"EQXR_VT10",
"FXTARGETED_NSA",
"FXUNTRADABLE_NSA",
"FXXR_NSA",
"FXXR_VT10",
"FXXRHvGDRB_NSA",
"DU02YXR_NSA",
"DU02YXR_VT10",
"DU05YXR_VT10",
]
xcats = main + rets + xtra
# Download series from J.P. Morgan DataQuery by tickers
start_date = "2000-01-01"
end_date = "2023-05-01"
tickers = [cid + "_" + xcat for cid in cids for xcat in xcats]
print(f"Maximum number of tickers is {len(tickers)}")
# Retrieve credentials
client_id: str = os.getenv("DQ_CLIENT_ID")
client_secret: str = os.getenv("DQ_CLIENT_SECRET")
with JPMaQSDownload(client_id=client_id, client_secret=client_secret) as dq:
df = dq.download(
tickers=tickers,
start_date=start_date,
end_date=end_date,
suppress_warning=True,
metrics=["value"],
report_time_taken=True,
show_progress=True,
)
Maximum number of tickers is 930
Downloading data from JPMaQS.
Timestamp UTC: 2024-03-27 10:54:54
Connection successful!
Requesting data: 100%|██████████| 47/47 [00:09<00:00, 4.89it/s]
Downloading data: 100%|██████████| 47/47 [00:13<00:00, 3.41it/s]
Time taken to download data: 25.68 seconds.
Some expressions are missing from the downloaded data. Check logger output for complete list.
232 out of 930 expressions are missing. To download the catalogue of all available expressions and filter the unavailable expressions, set `get_catalogue=True` in the call to `JPMaQSDownload.download()`.
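Besides `get_catalogue=True`, one can list the missing series directly by comparing the requested tickers with those present in the returned DataFrame. This is a minimal sketch; the toy frame below stands in for the real download:

```python
import pandas as pd

def missing_tickers(df, requested):
    # tickers actually present in the downloaded data
    downloaded = set(df["cid"] + "_" + df["xcat"])
    return sorted(set(requested) - downloaded)

# toy stand-in for the downloaded frame
toy = pd.DataFrame({"cid": ["USD", "EUR"], "xcat": ["EQXR_NSA", "EQXR_NSA"]})
print(missing_tickers(toy, ["USD_EQXR_NSA", "JPY_EQXR_NSA"]))  # ['JPY_EQXR_NSA']
```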
display(df["xcat"].unique())
display(df["cid"].unique())
df["ticker"] = df["cid"] + "_" + df["xcat"]
df.head(3)
array(['CPIC_SA_P1M1ML12', 'CPIC_SJA_P6M6ML6AR', 'CPIH_SA_P1M1ML12',
'CPIH_SJA_P6M6ML6AR', 'EMPL_NSA_P1M1ML12_3MMA', 'FXTARGETED_NSA',
'FXUNTRADABLE_NSA', 'FXXRHvGDRB_NSA', 'FXXR_NSA', 'FXXR_VT10',
'INFTEFF_NSA', 'INTRGDPv5Y_NSA_P1M1ML12_3MMA',
'PCREDITBN_SJA_P1M1ML12', 'RGDP_SA_P1Q1QL4_20QMM',
'UNEMPLRATE_NSA_3MMA_D1M1ML12', 'UNEMPLRATE_SA_3MMA',
'UNEMPLRATE_SA_D3M3ML3', 'WFORCE_NSA_P1Y1YL1_5YMM', 'DU02YXR_NSA',
'DU02YXR_VT10', 'DU05YXR_VT10', 'EQXR_NSA', 'EQXR_VT10',
'EMPL_NSA_P1Q1QL4', 'UNEMPLRATE_SA_3MMAv10YMM',
'UNEMPLRATE_NSA_D1Q1QL4', 'WFORCE_NSA_P1Q1QL4_20QMM',
'UNEMPLRATE_SA_D1Q1QL1', 'GB10YXR_NSA'], dtype=object)
array(['AUD', 'BRL', 'CAD', 'CHF', 'CLP', 'COP', 'CZK', 'EUR', 'GBP',
'HUF', 'IDR', 'ILS', 'INR', 'JPY', 'KRW', 'MXN', 'MYR', 'NOK',
'NZD', 'PEN', 'PHP', 'PLN', 'RON', 'RUB', 'SEK', 'SGD', 'THB',
'TRY', 'TWD', 'USD', 'ZAR'], dtype=object)
|   | real_date | cid | xcat | value | ticker |
|---|---|---|---|---|---|
| 0 | 2000-01-03 | AUD | CPIC_SA_P1M1ML12 | 1.244168 | AUD_CPIC_SA_P1M1ML12 |
| 1 | 2000-01-03 | AUD | CPIC_SJA_P6M6ML6AR | 1.428580 | AUD_CPIC_SJA_P6M6ML6AR |
| 2 | 2000-01-03 | AUD | CPIH_SA_P1M1ML12 | 1.647446 | AUD_CPIH_SA_P1M1ML12 |
scols = ["cid", "xcat", "real_date", "value"] # required columns
dfx = df[scols].copy()
dfx.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4005459 entries, 0 to 4005458
Data columns (total 4 columns):
# Column Dtype
--- ------ -----
0 cid object
1 xcat object
2 real_date datetime64[ns]
3 value float64
dtypes: datetime64[ns](1), float64(1), object(2)
memory usage: 122.2+ MB
Blacklist dictionaries #
Identifying and isolating periods of official exchange rate targets, illiquidity, or convertibility-related distortions in FX markets is the first step in creating an FX trading strategy. These periods can significantly impact the behavior and dynamics of currency markets, and failing to account for them can lead to inaccurate or misleading findings.
dfb = df[df["xcat"].isin(["FXTARGETED_NSA", "FXUNTRADABLE_NSA"])].loc[
:, ["cid", "xcat", "real_date", "value"]
]
dfba = (
dfb.groupby(["cid", "real_date"])
.aggregate(value=pd.NamedAgg(column="value", aggfunc="max"))
.reset_index()
)
dfba["xcat"] = "FXBLACK"
fxblack = msp.make_blacklist(dfba, "FXBLACK")
fxblack
{'BRL': (Timestamp('2012-12-03 00:00:00'), Timestamp('2013-09-30 00:00:00')),
'CHF': (Timestamp('2011-10-03 00:00:00'), Timestamp('2015-01-30 00:00:00')),
'CZK': (Timestamp('2014-01-01 00:00:00'), Timestamp('2017-07-31 00:00:00')),
'ILS': (Timestamp('2000-01-03 00:00:00'), Timestamp('2005-12-30 00:00:00')),
'INR': (Timestamp('2000-01-03 00:00:00'), Timestamp('2004-12-31 00:00:00')),
'MYR_1': (Timestamp('2000-01-03 00:00:00'), Timestamp('2007-11-30 00:00:00')),
'MYR_2': (Timestamp('2018-07-02 00:00:00'), Timestamp('2023-05-01 00:00:00')),
'PEN': (Timestamp('2021-07-01 00:00:00'), Timestamp('2021-07-30 00:00:00')),
'RON': (Timestamp('2000-01-03 00:00:00'), Timestamp('2005-11-30 00:00:00')),
'RUB_1': (Timestamp('2000-01-03 00:00:00'), Timestamp('2005-11-30 00:00:00')),
'RUB_2': (Timestamp('2022-02-01 00:00:00'), Timestamp('2023-05-01 00:00:00')),
'SGD': (Timestamp('2000-01-03 00:00:00'), Timestamp('2023-05-01 00:00:00')),
'THB': (Timestamp('2007-01-01 00:00:00'), Timestamp('2008-11-28 00:00:00')),
'TRY_1': (Timestamp('2000-01-03 00:00:00'), Timestamp('2003-09-30 00:00:00')),
'TRY_2': (Timestamp('2020-01-01 00:00:00'), Timestamp('2023-05-01 00:00:00'))}
dublack = {
"TRY": fxblack["TRY_2"]
} # create a customized blacklist for TRY to be used later in the code
Availability #
It is important to assess data availability before conducting any analysis. Doing so helps identify potential gaps or limitations in the dataset, which can impact the validity and reliability of the analysis; it also ensures that a sufficient number of observations is available for each selected category and cross-section, and helps determine the appropriate time periods for the analysis.
msm.check_availability(df, xcats=main, cids=cids)
Transformations and checks #
Features #
Name replacements #
dict_repl = {
"EMPL_NSA_P1Q1QL4": "EMPL_NSA_P1M1ML12_3MMA",
"WFORCE_NSA_P1Q1QL4_20QMM": "WFORCE_NSA_P1Y1YL1_5YMM",
"UNEMPLRATE_NSA_D1Q1QL4": "UNEMPLRATE_NSA_3MMA_D1M1ML12",
"UNEMPLRATE_SA_D1Q1QL1": "UNEMPLRATE_SA_D3M3ML3",
}
for key, value in dict_repl.items():
dfx["xcat"] = dfx["xcat"].str.replace(key, value)
msm.check_availability(dfx, xcats=list(dict_repl.values()), cids=cids)
Labor market scores #
Excess employment growth #
To proxy the impact of the business cycle state on employment growth, a common approach is to calculate the difference between employment growth and the long-term median of workforce growth. This difference is often referred to as “excess employment growth.” By calculating excess employment growth, one can estimate the component of employment growth that is attributable to the business cycle state. This measure helps to identify deviations from the long-term trend and provides insights into the cyclical nature of employment dynamics.
calcs = ["XEMPL_NSA_P1M1ML12_3MMA = EMPL_NSA_P1M1ML12_3MMA - WFORCE_NSA_P1Y1YL1_5YMM "]
dfa = msp.panel_calculator(dfx, calcs=calcs, cids=cids, blacklist=None)
dfx = msm.update_df(dfx, dfa)
The `macrosynergy` package provides two useful functions, `view_ranges()` and `view_timelines()`, which facilitate the convenient visualization of data for selected indicators and cross-sections. These functions assist in plotting means, standard deviations, and time series of the chosen indicators.
xcatx = ["EMPL_NSA_P1M1ML12_3MMA", "WFORCE_NSA_P1Y1YL1_5YMM"]
cidx = cids_mp
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="bar",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
xcatx = ["EMPL_NSA_P1M1ML12_3MMA", "XEMPL_NSA_P1M1ML12_3MMA"]
cidx = cids_mp
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="bar",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
Unemployment rates and gaps #
Unemployment rates and unemployment gaps are commonly used measures in labor market analysis. The unemployment rate measures the percentage of the labor force that is unemployed and actively seeking employment. The unemployment gap is the difference between the actual unemployment rate and a reference or target rate, and is used to assess the deviation of the current rate from the desired or expected level. Here we compare the standard unemployment rate (seasonally adjusted, 3-month moving average) with the unemployment rate gap, i.e., the 3-month moving average minus the 10-year moving median. Comparing the two gives insights into short-term fluctuations versus the long-term trend of the unemployment rate.
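The ready-made series `UNEMPLRATE_SA_3MMAv10YMM` already embodies this construction; purely for illustration, the gap logic on synthetic monthly data might look as follows (random data, not JPMaQS values):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# 20 years of synthetic monthly unemployment rates around 5%
ur = pd.Series(
    5 + rng.normal(0, 0.3, 240),
    index=pd.period_range("2004-01", periods=240, freq="M"),
)
ur_3mma = ur.rolling(3).mean()                       # short-term smoothed rate
ur_10ymm = ur.rolling(120, min_periods=60).median()  # long-term reference level
gap = ur_3mma - ur_10ymm                             # positive gap = elevated unemployment
```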
xcatx = ["UNEMPLRATE_SA_3MMA", "UNEMPLRATE_SA_3MMAv10YMM"]
cidx = cids
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="bar",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
Unemployment changes #
We create a simple average of two unemployment change indicators: the change in the unemployment rate over a year ago (NSA, 3-month moving average) and the change over the last three months (SA):
calcs = [
"UNEMPLRATE_DA = 1/2 * ( UNEMPLRATE_NSA_3MMA_D1M1ML12 + UNEMPLRATE_SA_D3M3ML3 )",
]
dfa = msp.panel_calculator(dfx, calcs=calcs, cids=cids, blacklist=None)
dfx = msm.update_df(dfx, dfa)
xcatx = ["UNEMPLRATE_NSA_3MMA_D1M1ML12", "UNEMPLRATE_SA_D3M3ML3", "UNEMPLRATE_DA"]
cidx = cids
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
Labor tightening scores #
We compute two types of labor market z-scores. One is based on the panel as a whole and assumes no structural differences in the features’ quantitative effects across sections. The other is predominantly based on the individual cross-section (three-quarters weight), which allows for persistent structural differences in distributions and their impact on targets. For a description and possible options of the function `make_zn_scores()` please see either Kaggle or the Academy notebooks.
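Conceptually, the `pan_weight` argument blends panel-wide and cross-section-specific scaling. The sketch below is a simplification under stated assumptions (zero neutral level, mean absolute deviation as the scale, no sequential out-of-sample re-estimation) and is not the actual `make_zn_scores()` implementation:

```python
import pandas as pd

def blended_scores(wide, pan_weight):
    """wide: dates x cross-sections. Zero-neutral scores with a blended scale."""
    scale_pan = wide.abs().stack().mean()   # one scale for the whole panel
    scale_cs = wide.abs().mean(axis=0)      # one scale per cross-section
    scale = pan_weight * scale_pan + (1 - pan_weight) * scale_cs
    return wide / scale

wide = pd.DataFrame({"USD": [1.0, -2.0], "EUR": [0.5, 0.5]})
znp = blended_scores(wide, pan_weight=1.0)   # pure panel scaling, as for "_ZNP"
znm = blended_scores(wide, pan_weight=0.25)  # 1/4 panel, 3/4 cross-section, as for "_ZNM"
```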
xcat_lab = [
"XEMPL_NSA_P1M1ML12_3MMA",
"UNEMPLRATE_DA",
"UNEMPLRATE_SA_3MMAv10YMM",
]
cidx = msm.common_cids(dfx, xcat_lab)
pws = [0.25, 1] # cross-sectional and panel-based normalization
for xc in xcat_lab:
for pw in pws:
dfa = msp.make_zn_scores(
dfx,
xcat=xc,
cids=cidx,
sequential=True,
min_obs=522, # oos scaling after 2 years of panel data
est_freq="m",
neutral="zero",
pan_weight=pw,
thresh=3,
postfix="_ZNP" if pw == 1 else "_ZNM",
)
dfx = msm.update_df(dfx, dfa)
The individual category scores are combined into a single labor market tightness score.
xcatx = [
"XEMPL_NSA_P1M1ML12_3MMA",
"UNEMPLRATE_DA",
"UNEMPLRATE_SA_3MMAv10YMM",
]
cidx = msm.common_cids(dfx, xcat_lab)
# cidx.remove("NZD") # ISSUE: invalid empty series created above
n = len(xcatx)
wx = [1 / n] * n
sx = [1, -1, -1] # signs for tightening
dix = {"ZNP": [xc + "_ZNP" for xc in xcatx], "ZNM": [xc + "_ZNM" for xc in xcatx]}
dfa = pd.DataFrame(columns=dfx.columns).reindex([])
for key, value in dix.items():
dfaa = msp.linear_composite(
dfx,
xcats=value,
weights=wx,
signs=sx,
cids=cidx,
complete_xcats=False, # if some categories are missing the score is based on the remaining
new_xcat="LABTIGHT_" + key,
)
dfa = msm.update_df(dfa, dfaa)
dfx = msm.update_df(dfx, dfa)
xcatx = [xc + "_ZNP" for xc in xcat_lab]
cidx = cids
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="bar",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
To summarize, we created two labor market tightening indicators. Each is a composite of three quantamental indicators that jointly track the usage of the economy’s labor force. The first is employment growth relative to workforce growth, where the former is measured in % over a year ago as a 3-month average, and the latter is an estimate based on the latest available five years of workforce growth. The second measures changes in the unemployment rate over a year ago and over the last three months, both as 3-month moving averages (view documentation here). The third is the level of the unemployment rate versus its 10-year moving median, again as a 3-month moving average. All three indicators are z-scored, then combined with equal weights, and the combination is again z-scored for subsequent analysis and aggregation. The two composites differ in the importance of the panel versus the individual cross-section for scaling the zn-scores: the “_ZNP” indicator uses the whole panel as the basis for the scaling parameters, while “_ZNM” puts 1/4 weight on the whole panel and 3/4 on the individual cross-section.
xcatx = ["LABTIGHT_ZNP", "LABTIGHT_ZNM"]
cidx = cids
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="bar",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
Excess inflation #
Similarly to labor market tightness, we can calculate plausible metrics of excess inflation versus a country’s effective inflation target. To make target deviations comparable across markets, the denominator base should never be less than 2, so we clip the estimated official inflation target for next year at a minimum value of 2 and use it as the denominator. We then calculate relative target deviations for a range of CPI inflation metrics.
dfa = msp.panel_calculator(
dfx,
["INFTEBASIS = INFTEFF_NSA.clip(lower=2)"],
cids=cids,
)
dfx = msm.update_df(dfx, dfa)
infs = [
"CPIH_SA_P1M1ML12",
"CPIH_SJA_P6M6ML6AR",
"CPIC_SA_P1M1ML12",
"CPIC_SJA_P6M6ML6AR",
]
for inf in infs:
calc_iet = f"{inf}vIETR = ( {inf} - INFTEFF_NSA ) / INFTEBASIS"
dfa = msp.panel_calculator(dfx, calcs=[calc_iet], cids=cids)
dfx = msm.update_df(dfx, dfa)
xcatx = [inf + "vIETR" for inf in infs]
cidx = cids
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="box",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
The individual excess inflation metrics are similar in size and, hence, can be directly combined into a composite excess inflation metric.
xcatx = [inf + "vIETR" for inf in infs]
cidx = cids
dfa = msp.linear_composite(
dfx,
xcats=xcatx,
cids=cidx,
complete_xcats=False, # if some categories are missing the score is based on the remaining
new_xcat="CPI_PCHvIETR",
)
dfx = msm.update_df(dfx, dfa)
As before, we normalize values for the composite excess inflation metric around zero based on the whole panel.
xcatx = "CPI_PCHvIETR"
cidx = cids
dfa = msp.make_zn_scores(
dfx,
xcat=xcatx,
cids=cidx,
sequential=True,
min_obs=522, # oos scaling after 2 years of panel data
est_freq="m",
neutral="zero",
pan_weight=1,
thresh=2.5,
postfix="_ZNP",
)
dfx = msm.update_df(dfx, dfa)
xcatx = ["CPI_PCHvIETR", "CPI_PCHvIETR_ZNP"]
cidx = cids
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="box",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx[0:2],
cids=cidx,
ncol=5,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
Excess growth #
Excess real-time growth estimates are z-scored for intuitive interpretation and to winsorize large outliers, which often reflect temporary disruptions and data issues. JPMaQS offers a ready-made indicator of the excess estimated GDP growth trend, labelled `INTRGDPv5Y_NSA_P1M1ML12_3MMA`. For each day this is the latest estimated GDP growth trend (% over a year ago, 3-month moving average) minus a 5-year median of that country’s actual GDP growth rate. The historic median represents the growth rate that businesses and markets have grown used to. The GDP growth trend is estimated based on actual national accounts and monthly activity data, using sets of regressions that replicate conventional charting methods in markets (view full documentation here). As before, for subsequent aggregation and analysis, we z-score the indicator (normalize volatility) around zero on an expanding out-of-sample basis, using the whole panel of cross-sections to estimate the standard deviations.
xcatx = "INTRGDPv5Y_NSA_P1M1ML12_3MMA"
cidx = cids
dfa = msp.make_zn_scores(
dfx,
xcat=xcatx,
cids=cidx,
sequential=True,
min_obs=522, # oos scaling after 2 years of panel data
est_freq="m",
neutral="zero",
pan_weight=1,
# thresh=3,
postfix="_ZNP",
)
dfx = msm.update_df(dfx, dfa)
xcatx = ["INTRGDPv5Y_NSA_P1M1ML12_3MMA", "INTRGDPv5Y_NSA_P1M1ML12_3MMA_ZNP"]
cidx = cids
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="box",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx[0:2],
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
Features relative to the base currency #
cycles = [
"LABTIGHT",
"CPI_PCHvIETR",
"INTRGDPv5Y_NSA_P1M1ML12_3MMA",
]
xcatx = [cc + "_ZNP" for cc in cycles]
for xc in xcatx:
calc_eur = [f"{xc}vBM = {xc} - iEUR_{xc}"]
calc_usd = [f"{xc}vBM = {xc} - iUSD_{xc}"]
calc_eud = [f"{xc}vBM = {xc} - 0.5 * ( iEUR_{xc} + iUSD_{xc} )"]
dfa_eur = msp.panel_calculator(dfx, calcs=calc_eur, cids=cids_eur)
dfa_usd = msp.panel_calculator(dfx, calcs=calc_usd, cids=cids_usd + ["SGD"])
dfa_eud = msp.panel_calculator(dfx, calcs=calc_eud, cids=cids_eud)
dfa = pd.concat([dfa_eur, dfa_usd, dfa_eud])
dfx = msm.update_df(dfx, dfa)
xcatx = ["LABTIGHT_ZNP", "LABTIGHT_ZNPvBM"]
cidx = cids_fx
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="bar",
sort_cids_by="mean",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx[0:2],
cids=cidx,
ncol=4,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
)
Composite z-scores #
We calculate composite zn-scores of cyclical strength with and without labor market tightness. We also calculate composite zn-score differences to FX base currencies with and without labor market tightness.
# Cyclical strength constituents and list of its keys
d_cs = {
"G": "INTRGDPv5Y_NSA_P1M1ML12_3MMA",
"I": "CPI_PCHvIETR",
"L": "LABTIGHT",
# "C": "XPCREDITBN_SJA_P1M1ML12", not so relevant for cyclical strength
}
cs_keys = list(d_cs.keys())
# Available cross-sections
xcatx_znp = [d_cs[i] + "_ZNP" for i in cs_keys]
cidx_znp = msm.common_cids(dfx, xcatx_znp)
xcatx_vbm = [d_cs[i] + "_ZNPvBM" for i in cs_keys]
cidx_vbm = msm.common_cids(dfx, xcatx_vbm)
d_ar = {"_ZNP": cidx_znp, "_ZNPvBM": cidx_vbm}
# Collect all cycle strength key combinations
cs_combs = [combo for r in range(1, 5) for combo in combinations(cs_keys, r)]
# Use key combinations to calculate all possible factor combinations
dfa = pd.DataFrame(columns=dfx.columns).reindex([])
for cs in cs_combs:
for key, value in d_ar.items():
xcatx = [
d_cs[i] + key for i in cs
] # extract absolute or relative xcat combination
dfaa = msp.linear_composite(
dfx,
xcats=xcatx,
cids=value,
complete_xcats=False, # if some categories are missing the score is based on the remaining
new_xcat="CS" + "".join(cs) + key[4:] + "_ZC",
)
dfa = msm.update_df(dfa, dfaa)
dfx = msm.update_df(dfx, dfa)
# Collect factor combinations in lists
cs_all = dfa["xcat"].unique()
cs_dir = [cs for cs in cs_all if "vBM" not in cs]
cs_rel = [cs for cs in cs_all if "vBM" in cs]
xcatx = ["CSG_ZC"]
cidx = cidx_znp
msp.view_timelines(
dfx,
xcats=xcatx[0:2],
cids=cidx,
ncol=5,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
title="Excess GDP growth z-scores",
)
xcatx = ["CSL_ZC"]
cidx = cidx_znp
msp.view_timelines(
dfx,
xcats=xcatx[0:2],
cids=cidx,
ncol=5,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
title="Labor market tightness composite z-scores",
)
xcatx = ["CSI_ZC"]
cidx = cidx_znp
msp.view_timelines(
dfx,
xcats=xcatx[0:2],
cids=cidx,
ncol=5,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
title="Excess CPI inflation z-scores",
)
xcatx = ["CSGIL_ZC", "CSGILvBM_ZC"]
cidx = cidx_znp
msp.view_timelines(
dfx,
xcats=xcatx[0:2],
xcat_labels=["outright score", "relative to benchmark currency"],
cids=cidx,
ncol=5,
cumsum=False,
start="2000-01-01",
same_y=False,
size=(12, 12),
all_xticks=True,
title="Composite cyclical strength scores, outright and versus benchmark currency area",
)
Targets #
Directional vol-targeted IRS returns #
xcatx = ["DU02YXR_VT10", "DU05YXR_VT10"]
cidx = list(set(cids_du) - set(["TRY"]))
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="box",
sort_cids_by="std",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=True,
start="2000-01-01",
same_y=True,
size=(12, 12),
all_xticks=True,
)
Directional equity returns #
xcatx = ["EQXR_NSA", "EQXR_VT10"]
cidx = cids_eq
msp.view_ranges(
dfx,
cids=cidx,
xcats=xcatx,
kind="box",
sort_cids_by="std",
ylab="% daily rate",
start="2000-01-01",
)
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=True,
start="2000-01-01",
same_y=True,
size=(12, 12),
all_xticks=True,
)
FX returns relative to base currencies #
xcatx = ["FXXR_NSA", "FXXR_VT10"]
cidx = cids_fx
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=True,
start="2000-01-01",
same_y=True,
size=(12, 12),
all_xticks=True,
)
FX versus equity returns #
cidx_fxeq = msm.common_cids(dfx, ["FXXR_VT10", "EQXR_VT10"])
calcs = ["FXvEQXR = FXXR_VT10 - EQXR_VT10 "]
dfa = msp.panel_calculator(dfx, calcs=calcs, cids=cidx_fxeq, blacklist=None)
dfx = msm.update_df(dfx, dfa)
xcatx = ["FXvEQXR"]
cidx = cidx_fxeq
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=True,
start="2000-01-01",
same_y=True,
size=(12, 12),
all_xticks=True,
)
FX versus IRS returns #
cidx_fxdu = list(
set(msm.common_cids(dfx, ["FXXR_VT10", "DU05YXR_VT10"])) - set(["IDR"])
)
calcs = ["FXvDU05XR = FXXR_VT10 - DU05YXR_VT10 "]
dfa = msp.panel_calculator(dfx, calcs=calcs, cids=cidx_fxdu, blacklist=dublack)
dfx = msm.update_df(dfx, dfa)
xcatx = ["FXvDU05XR"]
cidx = cidx_fxdu
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=True,
start="2000-01-01",
same_y=True,
size=(12, 12),
all_xticks=True,
)
2s-5s flattener returns #
cidx_du52 = list(
set(msm.common_cids(dfx, ["DU02YXR_VT10", "DU05YXR_VT10"])) - set(["IDR"])
)
calcs = ["DU05v02XR = DU05YXR_VT10 - DU02YXR_VT10 "]
dfa = msp.panel_calculator(dfx, calcs=calcs, cids=cidx_du52, blacklist=dublack)
dfx = msm.update_df(dfx, dfa)
xcatx = ["DU05v02XR"]
cidx = cidx_du52
msp.view_timelines(
dfx,
xcats=xcatx,
cids=cidx,
ncol=4,
cumsum=True,
start="2000-01-01",
same_y=True,
size=(12, 12),
all_xticks=True,
)
Value checks #
Directional equity strategy #
Specs and panel test #
sigs = cs_dir
ms = "CSGIL_ZC" # main signal
oths = list(set(sigs) - set([ms])) # other signals
targ = "EQXR_VT10"
cidx = msm.common_cids(dfx, sigs + [targ])
# cidx = list(set(cids_dm) & set(cidx)) # for DM alone
dict_eqdi = {
"sig": ms,
"rivs": oths,
"targ": targ,
"cidx": cidx,
"black": fxblack,
"srr": None,
"pnls": None,
}
dix = dict_eqdi
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q", # quarterly frequency allows for policy inertia
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[None, None],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
xlab="Cyclical strength composite score, end of quarter",
ylab="Equity index future return next quarter for 10% vol target",
title="Cyclical strength and subsequent equity index futures returns",
size=(10, 6),
prob_est="map",
)
Accuracy and correlation check #
dix = dict_eqdi
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
srr = mss.SignalReturnRelations(
dfx,
cids=cidx,
sigs=[sig] + rivs,
sig_neg=[True] + [True] * len(rivs),
rets=targ,
freqs="M",
start="2000-01-01",
blacklist=blax,
)
dix["srr"] = srr
dix = dict_eqdi
srrx = dix["srr"]
display(srrx.summary_table().astype("float").round(3))
|   | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
|---|---|---|---|---|---|---|---|---|---|---|---|
| M: CSGIL_ZC_NEG/last => EQXR_VT10 | 0.527 | 0.525 | 0.510 | 0.593 | 0.617 | 0.433 | 0.103 | 0.000 | 0.055 | 0.000 | 0.526 |
| Mean years | 0.521 | 0.512 | 0.493 | 0.592 | 0.598 | 0.425 | 0.040 | 0.412 | 0.023 | 0.458 | 0.511 |
| Positive ratio | 0.542 | 0.625 | 0.583 | 0.667 | 0.750 | 0.292 | 0.500 | 0.375 | 0.500 | 0.375 | 0.625 |
| Mean cids | 0.525 | 0.522 | 0.508 | 0.589 | 0.610 | 0.434 | 0.104 | 0.229 | 0.053 | 0.325 | 0.522 |
| Positive ratio | 0.765 | 0.706 | 0.471 | 0.941 | 0.941 | 0.059 | 0.941 | 0.882 | 0.824 | 0.647 | 0.706 |
dix = dict_eqdi
srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
| Return | Signal | Frequency | Aggregation | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EQXR_VT10 | CSGIL_ZC_NEG | M | last | 0.527 | 0.525 | 0.510 | 0.593 | 0.617 | 0.433 | 0.103 | 0.000 | 0.055 | 0.000 | 0.526 |
| | CSGI_ZC_NEG | M | last | 0.535 | 0.525 | 0.557 | 0.593 | 0.615 | 0.435 | 0.090 | 0.000 | 0.050 | 0.000 | 0.526 |
| | CSGL_ZC_NEG | M | last | 0.492 | 0.501 | 0.452 | 0.593 | 0.593 | 0.408 | 0.077 | 0.000 | 0.033 | 0.002 | 0.501 |
| | CSG_ZC_NEG | M | last | 0.516 | 0.510 | 0.531 | 0.593 | 0.602 | 0.418 | 0.052 | 0.001 | 0.014 | 0.177 | 0.510 |
| | CSIL_ZC_NEG | M | last | 0.528 | 0.527 | 0.503 | 0.593 | 0.619 | 0.435 | 0.112 | 0.000 | 0.066 | 0.000 | 0.528 |
| | CSI_ZC_NEG | M | last | 0.537 | 0.527 | 0.556 | 0.593 | 0.617 | 0.437 | 0.087 | 0.000 | 0.051 | 0.000 | 0.527 |
| | CSL_ZC_NEG | M | last | 0.496 | 0.514 | 0.406 | 0.593 | 0.611 | 0.418 | 0.083 | 0.000 | 0.046 | 0.000 | 0.514 |
dix = dict_eqdi
srrx = dix["srr"]
srrx.accuracy_bars(
type="years",
title="Accuracy of monthly predictions of equity index future returns across markets",
size=(14, 6),
)
Naive PnL #
dix = dict_eqdi
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
naive_pnl = msn.NaivePnL(
dfx,
ret=targ,
sigs=sigx,
cids=cidx,
start="2000-01-01",
blacklist=blax,
bms=["USD_EQXR_NSA"],
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=True,
sig_op="zn_score_pan",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_PZN",
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=True,
sig_op="binary",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_BIN",
)
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
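The `make_pnl` parameters above can be unpacked conceptually: the raw signal is z-scored across the panel (`zn_score_pan`), winsorized at 3 standard deviations (`thresh=3`), applied with monthly rebalancing and a one-day slippage (`rebal_slip=1`), and the resulting PnL is scaled to a 10% annualized volatility (`vol_scale=10`). Below is a minimal single-series sketch of this logic on synthetic data; all names are hypothetical, and the actual (panel-wide, point-in-time) computation lives in `macrosynergy.pnl.NaivePnL`. Note that the notebook additionally negates the signal (`sig_neg=True`) for the equity case, which this sketch omits.

```python
import numpy as np
import pandas as pd

# Synthetic daily data for one cross-section (illustrative only).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2020-01-01", periods=500)
ret = pd.Series(rng.normal(0, 0.01, len(dates)), index=dates)  # daily target returns
sig = pd.Series(rng.normal(0, 1.0, len(dates)), index=dates)   # raw daily signal

# Z-score the signal and winsorize at +/- 3 standard deviations (thresh=3).
zn = ((sig - sig.mean()) / sig.std()).clip(-3, 3)

# Monthly rebalancing: take the signal at each business month start,
# hold it through the month, and apply a one-day slippage (rebal_slip=1).
pos = zn.resample("BMS").first().reindex(dates).ffill().shift(1)

# Naive PnL: position times return, scaled ex post to 10% annualized vol.
pnl = (pos * ret).dropna()
pnl *= 0.10 / (pnl.std() * np.sqrt(252))

print(round(pnl.std() * np.sqrt(252), 4))  # -> 0.1
```

The ex-post volatility scaling is what makes the PnL "naive": it uses the full sample standard deviation, so it is a research benchmark rather than a tradable strategy.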
dix = dict_eqdi
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + x for x in ["_PZN", "_BIN"]] + ["Long only"]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_eqdi
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + "_PZN"] + ["Long only"]
dict_labels={"CSGIL_ZC_PZN": "based on negative of cyclical strength z-score",
"Long only": "long only portfolio across 18 currencies (risk parity)"}
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title="Equity index future PnL across 18 markets",
xcat_labels=dict_labels,
ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
figsize=(16, 8),
)
dix = dict_eqdi
sigx = dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_eqdi
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + type for sig in sigx for type in ["_PZN", "_BIN"]]
df_eval = naive_pnl.evaluate_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
)
display(df_eval.transpose())
| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | USD_EQXR_NSA correl | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
CSGIL_ZC_BIN | 5.631627 | 10.0 | 0.563163 | 0.827838 | -12.574485 | -15.858519 | -0.040013 | 280 |
CSGIL_ZC_PZN | 7.168371 | 10.0 | 0.716837 | 1.095916 | -15.928615 | -16.286255 | 0.020557 | 280 |
CSGI_ZC_BIN | 5.907057 | 10.0 | 0.590706 | 0.848992 | -15.441828 | -18.891451 | 0.045406 | 280 |
CSGI_ZC_PZN | 6.714018 | 10.0 | 0.671402 | 1.009097 | -15.17655 | -16.132144 | 0.107973 | 280 |
CSGL_ZC_BIN | 0.214855 | 10.0 | 0.021485 | 0.031388 | -12.271435 | -19.965225 | 0.002893 | 280 |
CSGL_ZC_PZN | 5.071182 | 10.0 | 0.507118 | 0.78245 | -15.829626 | -18.60795 | 0.095921 | 280 |
CSG_ZC_BIN | 1.455893 | 10.0 | 0.145589 | 0.207849 | -22.698703 | -23.789479 | 0.199808 | 280 |
CSG_ZC_PZN | 3.786492 | 10.0 | 0.378649 | 0.578135 | -14.450651 | -26.506881 | 0.255389 | 280 |
CSIL_ZC_BIN | 6.511647 | 10.0 | 0.651165 | 0.951244 | -13.67222 | -16.624512 | -0.131998 | 280 |
CSIL_ZC_PZN | 7.4559 | 10.0 | 0.74559 | 1.108323 | -19.844673 | -23.7039 | -0.156646 | 280 |
CSI_ZC_BIN | 7.049114 | 10.0 | 0.704911 | 1.002462 | -22.004282 | -16.719062 | -0.014971 | 280 |
CSI_ZC_PZN | 6.346358 | 10.0 | 0.634636 | 0.916943 | -19.738179 | -24.966872 | -0.100821 | 280 |
CSL_ZC_BIN | 3.078537 | 10.0 | 0.307854 | 0.4647 | -12.836093 | -17.641988 | -0.224892 | 280 |
CSL_ZC_PZN | 5.199916 | 10.0 | 0.519992 | 0.782508 | -16.123457 | -14.959692 | -0.160291 | 280 |
dix = dict_eqdi
sig = dix["sig"]
naive_pnl = dix["pnls"]
naive_pnl.signal_heatmap(
pnl_name=sig + "_PZN", freq="q", start="2000-01-01", figsize=(16, 5)
)
Directional FX strategy #
Specs and panel test #
sigs = cs_rel
ms = "CSGILvBM_ZC" # main signal
oths = list(set(sigs) - set([ms])) # other signals
targ = "FXXR_VT10"
cidx = msm.common_cids(dfx, sigs + [targ])
# cidx = list(set(cids_dm) & set(cidx)) # for DM alone
dict_fxdi = {
"sig": ms,
"rivs": oths,
"targ": targ,
"cidx": cidx,
"black": None,
"srr": None,
"pnls": None,
}
dix = dict_fxdi
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q", # quarterly frequency allows for policy inertia
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[1000, 40],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
xlab="Cyclical strength composite score versus benchmark currency area, end of quarter",
    ylab="1-month FX forward return next quarter for 10% vol target",
title="Relative cyclical strength and subsequent FX forward returns, 2000-2023 (Apr)",
size=(10, 6),
prob_est="map",
)
Accuracy and correlation check #
dix = dict_fxdi
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
srr = mss.SignalReturnRelations(
dfx,
cids=cidx,
sigs=[sig] + rivs,
rets=targ,
freqs="M",
start="2000-01-01",
blacklist=blax,
)
dix["srr"] = srr
dix = dict_fxdi
srrx = dix["srr"]
display(srrx.summary_table().astype("float").round(3))
| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
M: CSGILvBM_ZC/last => FXXR_VT10 | 0.521 | 0.526 | 0.455 | 0.547 | 0.575 | 0.477 | 0.073 | 0.000 | 0.049 | 0.000 | 0.526 |
Mean years | 0.520 | 0.517 | 0.453 | 0.546 | 0.562 | 0.471 | 0.057 | 0.299 | 0.035 | 0.287 | 0.514 |
Positive ratio | 0.625 | 0.708 | 0.417 | 0.708 | 0.792 | 0.375 | 0.750 | 0.667 | 0.750 | 0.667 | 0.708 |
Mean cids | 0.521 | 0.522 | 0.457 | 0.548 | 0.572 | 0.471 | 0.070 | 0.314 | 0.044 | 0.322 | 0.521 |
Positive ratio | 0.741 | 0.741 | 0.259 | 0.889 | 0.889 | 0.333 | 0.778 | 0.630 | 0.815 | 0.593 | 0.741 |
dix = dict_fxdi
srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
| Return | Signal | Frequency | Aggregation | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FXXR_VT10 | CSGILvBM_ZC | M | last | 0.521 | 0.526 | 0.455 | 0.547 | 0.575 | 0.477 | 0.073 | 0.000 | 0.049 | 0.000 | 0.526 |
| | CSGIvBM_ZC | M | last | 0.506 | 0.509 | 0.467 | 0.545 | 0.555 | 0.463 | 0.049 | 0.000 | 0.032 | 0.000 | 0.509 |
| | CSGLvBM_ZC | M | last | 0.522 | 0.523 | 0.487 | 0.547 | 0.570 | 0.476 | 0.061 | 0.000 | 0.044 | 0.000 | 0.523 |
| | CSGvBM_ZC | M | last | 0.512 | 0.513 | 0.489 | 0.540 | 0.553 | 0.472 | 0.023 | 0.056 | 0.014 | 0.069 | 0.513 |
| | CSILvBM_ZC | M | last | 0.529 | 0.533 | 0.461 | 0.546 | 0.581 | 0.485 | 0.083 | 0.000 | 0.059 | 0.000 | 0.533 |
| | CSIvBM_ZC | M | last | 0.515 | 0.519 | 0.458 | 0.543 | 0.564 | 0.475 | 0.053 | 0.000 | 0.038 | 0.000 | 0.519 |
| | CSLvBM_ZC | M | last | 0.529 | 0.531 | 0.476 | 0.545 | 0.578 | 0.485 | 0.077 | 0.000 | 0.058 | 0.000 | 0.532 |
dix = dict_fxdi
srrx = dix["srr"]
srrx.accuracy_bars(
type="years",
# title="",
size=(14, 6),
)
Naive PnL #
dix = dict_fxdi
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
naive_pnl = msn.NaivePnL(
dfx,
ret=targ,
sigs=sigx,
cids=cidx,
start="2000-01-01",
blacklist=blax,
bms=["USD_EQXR_NSA"],
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="zn_score_pan",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_PZN",
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="binary",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_BIN",
)
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
dix = dict_fxdi
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + x for x in ["_PZN", "_BIN"]] + ["Long only"]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxdi
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + "_PZN"] + ["Long only"]
dict_labels={"CSGILvBM_ZC_PZN":"based on relative cyclical strength z-score",
"Long only": "long only portfolio in all 27 smaller currencies (versus USD and EUR, risk parity)"}
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title="FX forward PnL across 27 currency areas (ex USD and EUR)",
xcat_labels=dict_labels,
ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
figsize=(16, 8),
)
dix = dict_fxdi
sigx = dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxdi
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + type for sig in sigx for type in ["_PZN", "_BIN"]]
df_eval = naive_pnl.evaluate_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
)
display(df_eval.transpose())
| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | USD_EQXR_NSA correl | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
CSGILvBM_ZC_BIN | 7.581866 | 10.0 | 0.758187 | 1.133885 | -10.905731 | -22.371665 | 0.017322 | 280 |
CSGILvBM_ZC_PZN | 9.145749 | 10.0 | 0.914575 | 1.36518 | -15.194445 | -28.776567 | 0.067821 | 280 |
CSGIvBM_ZC_BIN | 2.414683 | 10.0 | 0.241468 | 0.35028 | -11.589378 | -24.538037 | 0.061624 | 280 |
CSGIvBM_ZC_PZN | 7.124415 | 10.0 | 0.712442 | 1.064247 | -9.993622 | -21.927455 | 0.059716 | 280 |
CSGLvBM_ZC_BIN | 6.420957 | 10.0 | 0.642096 | 0.921038 | -15.215601 | -25.991379 | 0.010563 | 280 |
CSGLvBM_ZC_PZN | 8.091624 | 10.0 | 0.809162 | 1.202398 | -16.428007 | -29.305222 | 0.020714 | 280 |
CSGvBM_ZC_BIN | 4.708552 | 10.0 | 0.470855 | 0.674232 | -15.420626 | -18.747611 | -0.050529 | 280 |
CSGvBM_ZC_PZN | 3.8198 | 10.0 | 0.38198 | 0.558218 | -14.163976 | -22.241164 | -0.037362 | 280 |
CSILvBM_ZC_BIN | 8.083422 | 10.0 | 0.808342 | 1.200437 | -15.646868 | -27.33098 | 0.05015 | 280 |
CSILvBM_ZC_PZN | 9.195971 | 10.0 | 0.919597 | 1.365385 | -16.959438 | -30.475521 | 0.102198 | 280 |
CSIvBM_ZC_BIN | 4.153229 | 10.0 | 0.415323 | 0.602003 | -10.815824 | -20.837042 | 0.079615 | 280 |
CSIvBM_ZC_PZN | 6.951853 | 10.0 | 0.695185 | 1.030468 | -14.423997 | -25.205693 | 0.117467 | 280 |
CSLvBM_ZC_BIN | 7.268241 | 10.0 | 0.726824 | 1.047902 | -15.6895 | -32.47741 | 0.007421 | 280 |
CSLvBM_ZC_PZN | 8.236876 | 10.0 | 0.823688 | 1.201569 | -21.193315 | -37.331915 | 0.061815 | 280 |
dix = dict_fxdi
sig = dix["sig"]
naive_pnl = dix["pnls"]
naive_pnl.signal_heatmap(
pnl_name=sig + "_PZN", freq="m", start="2000-01-01", figsize=(16, 8)
)
Directional IRS strategy #
Specs and panel test #
sigs = cs_dir
ms = "CSGIL_ZC" # main signal
oths = list(set(sigs) - set([ms])) # other signals
targ = "DU05YXR_VT10" # "DU02YXR_VT10"
cidx = msm.common_cids(dfx, sigs + [targ])
# cidx = list(set(cids_dm) & set(cidx)) # for DM alone
dict_dudi = {
"sig": ms,
"rivs": oths,
"targ": targ,
"cidx": cidx,
"black": dublack,
"srr": None,
"pnls": None,
}
dix = dict_dudi
cidx = dix["cidx"]
print(len(cidx))
", ".join(cidx)
25
'AUD, CAD, CHF, CLP, COP, CZK, EUR, GBP, HUF, ILS, JPY, KRW, MXN, MYR, NOK, NZD, PLN, RUB, SEK, SGD, THB, TRY, TWD, USD, ZAR'
dix = dict_dudi
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q",
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[None, None],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
xlab="Cyclical strength composite score, end of quarter",
ylab="5-year IRS return next quarter for 10% vol target",
title="Cyclical strength and subsequent 5-year IRS returns, 2000-2023 (Apr)",
size=(10, 6),
prob_est="map",
)
dix = dict_dudi
sig = dix["sig"]
targ = dix["targ"]
cidx = ["EUR", "USD"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q",
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[None, None],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
xlab="Cyclical strength composite score, end of quarter",
ylab="5-year IRS return next quarter for 10% vol target",
title="Cyclical strength and subsequent 5-year IRS returns, U.S. and euro area only, 2000-2023",
size=(10, 6),
prob_est="map",
)
Accuracy and correlation check #
dix = dict_dudi
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
srr = mss.SignalReturnRelations(
dfx,
cids=cidx,
sigs=[sig] + rivs,
sig_neg=[True] + [True] * len(rivs),
rets=targ,
freqs="M",
start="2000-01-01",
blacklist=blax,
)
dix["srr"] = srr
dix = dict_dudi
srrx = dix["srr"]
display(srrx.summary_table().astype("float").round(3))
| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
M: CSGIL_ZC_NEG/last => DU05YXR_VT10 | 0.524 | 0.524 | 0.504 | 0.547 | 0.570 | 0.477 | 0.045 | 0.001 | 0.030 | 0.000 | 0.524 |
Mean years | 0.513 | 0.508 | 0.487 | 0.558 | 0.566 | 0.450 | 0.005 | 0.441 | 0.007 | 0.430 | 0.508 |
Positive ratio | 0.625 | 0.583 | 0.458 | 0.792 | 0.750 | 0.292 | 0.542 | 0.292 | 0.583 | 0.375 | 0.583 |
Mean cids | 0.523 | 0.523 | 0.502 | 0.545 | 0.569 | 0.477 | 0.043 | 0.472 | 0.029 | 0.487 | 0.523 |
Positive ratio | 0.840 | 0.760 | 0.440 | 0.960 | 0.960 | 0.280 | 0.800 | 0.520 | 0.720 | 0.520 | 0.760 |
Labor market dynamics are good predictors, while the labor market's broad state is not. This supports the hypothesis that fixed-income markets are inattentive only to recent dynamics, not to the overall state of the economy.
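The accuracy statistics in these tables are straightforward to interpret: accuracy is the share of months where the sign of the end-of-month signal matches the sign of the subsequent return, while balanced accuracy averages the hit rates over positive-signal and negative-signal months, correcting for any long or short bias in the signal. A minimal illustrative check on synthetic data (not JPMaQS data; the actual metrics come from `macrosynergy.signal.SignalReturnRelations`):

```python
import numpy as np

# Synthetic monthly panel: a signal weakly predictive of next-month returns.
rng = np.random.default_rng(1)
n = 1000
sig = rng.normal(size=n)              # end-of-month signal
ret = 0.1 * sig + rng.normal(size=n)  # next-month return, weakly related

# Accuracy: share of sign agreements between signal and subsequent return.
acc = np.mean((sig > 0) == (ret > 0))
# Balanced accuracy: average of hit rates on positive and negative signals.
bal = 0.5 * (np.mean(ret[sig > 0] > 0) + np.mean(ret[sig <= 0] <= 0))

print(round(acc, 3), round(bal, 3))
```

With a weakly predictive signal, both statistics land modestly above 0.5, which is the same order of magnitude as the roughly 0.51-0.53 readings in the tables above.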
dix = dict_dudi
srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
| Return | Signal | Frequency | Aggregation | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DU05YXR_VT10 | CSGIL_ZC_NEG | M | last | 0.524 | 0.524 | 0.504 | 0.547 | 0.570 | 0.477 | 0.045 | 0.001 | 0.030 | 0.000 | 0.524 |
| | CSGI_ZC_NEG | M | last | 0.527 | 0.521 | 0.564 | 0.547 | 0.565 | 0.477 | 0.047 | 0.000 | 0.029 | 0.001 | 0.521 |
| | CSGL_ZC_NEG | M | last | 0.513 | 0.516 | 0.464 | 0.547 | 0.564 | 0.468 | 0.034 | 0.009 | 0.034 | 0.000 | 0.516 |
| | CSG_ZC_NEG | M | last | 0.518 | 0.514 | 0.538 | 0.547 | 0.560 | 0.469 | 0.032 | 0.014 | 0.030 | 0.000 | 0.514 |
| | CSIL_ZC_NEG | M | last | 0.515 | 0.515 | 0.494 | 0.547 | 0.562 | 0.468 | 0.040 | 0.002 | 0.025 | 0.003 | 0.516 |
| | CSI_ZC_NEG | M | last | 0.524 | 0.519 | 0.549 | 0.546 | 0.563 | 0.476 | 0.034 | 0.009 | 0.025 | 0.005 | 0.519 |
| | CSL_ZC_NEG | M | last | 0.499 | 0.511 | 0.383 | 0.546 | 0.559 | 0.462 | 0.027 | 0.042 | 0.025 | 0.005 | 0.510 |
dix = dict_dudi
srrx = dix["srr"]
srrx.accuracy_bars(
type="years",
# title="Accuracy of monthly predictions of FX forward returns for 26 EM and DM currencies",
size=(14, 6),
)
Naive PnL #
dix = dict_dudi
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
naive_pnl = msn.NaivePnL(
dfx,
ret=targ,
sigs=sigx,
cids=cidx,
start="2000-01-01",
blacklist=blax,
bms=["USD_EQXR_NSA", "USD_DU05YXR_VT10"],
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=True,
sig_op="zn_score_pan",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_PZN",
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=True,
sig_op="binary",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_BIN",
)
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
dix = dict_dudi
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + x for x in ["_PZN", "_BIN"]] + ["Long only"]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_dudi
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + "_PZN"] + ["Long only"]
dict_labels={"CSGIL_ZC_PZN":"based on negative of cyclical strength z-score",
"Long only": "receiver only portfolio across 25 currencies (risk parity)"}
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title="5-year interest rate swap PnL across 25 markets",
xcat_labels=dict_labels,
ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
figsize=(16, 8),
)
dix = dict_dudi
sigx = dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_dudi
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + type for sig in sigx for type in ["_PZN", "_BIN"]]
df_eval = naive_pnl.evaluate_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
)
display(df_eval.transpose())
| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | USD_EQXR_NSA correl | USD_DU05YXR_VT10 correl | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
CSGIL_ZC_BIN | 4.797406 | 10.0 | 0.479741 | 0.665956 | -33.694961 | -47.810115 | -0.019424 | -0.010113 | 280 |
CSGIL_ZC_PZN | 3.935876 | 10.0 | 0.393588 | 0.54157 | -45.462648 | -67.184684 | -0.038617 | -0.007781 | 280 |
CSGI_ZC_BIN | 4.990508 | 10.0 | 0.499051 | 0.686961 | -32.700064 | -47.632166 | -0.046778 | 0.108698 | 280 |
CSGI_ZC_PZN | 4.681551 | 10.0 | 0.468155 | 0.651289 | -42.022773 | -63.062648 | -0.063336 | 0.077991 | 280 |
CSGL_ZC_BIN | 3.017045 | 10.0 | 0.301705 | 0.422572 | -34.509501 | -50.57158 | -0.024413 | -0.141108 | 280 |
CSGL_ZC_PZN | 3.527915 | 10.0 | 0.352791 | 0.47902 | -51.83391 | -78.395939 | -0.01665 | -0.03381 | 280 |
CSG_ZC_BIN | 4.589429 | 10.0 | 0.458943 | 0.648392 | -33.892968 | -49.668087 | -0.053725 | 0.050331 | 280 |
CSG_ZC_PZN | 4.531657 | 10.0 | 0.453166 | 0.624481 | -49.565539 | -76.347219 | -0.043994 | 0.084394 | 280 |
CSIL_ZC_BIN | 1.459482 | 10.0 | 0.145948 | 0.200617 | -25.684814 | -45.528623 | -0.001844 | -0.029943 | 280 |
CSIL_ZC_PZN | 3.156607 | 10.0 | 0.315661 | 0.43886 | -31.004282 | -48.197922 | -0.030765 | -0.064269 | 280 |
CSI_ZC_BIN | 3.737409 | 10.0 | 0.373741 | 0.526586 | -18.967709 | -46.946868 | -0.057709 | 0.099768 | 280 |
CSI_ZC_PZN | 3.284128 | 10.0 | 0.328413 | 0.462039 | -18.846375 | -56.491783 | -0.06453 | 0.032585 | 280 |
CSL_ZC_BIN | -0.601205 | 10.0 | -0.060121 | -0.082189 | -34.285089 | -44.81244 | 0.055185 | -0.259587 | 280 |
CSL_ZC_PZN | 1.741007 | 10.0 | 0.174101 | 0.238338 | -42.016349 | -59.187482 | 0.031392 | -0.200951 | 280 |
dix = dict_dudi
sig = dix["sig"]
naive_pnl = dix["pnls"]
naive_pnl.signal_heatmap(
pnl_name=sig + "_PZN", freq="m", start="2000-01-01", figsize=(16, 8)
)
FX versus equity strategy (directional features) #
Specs and panel test #
sigs = cs_dir
ms = "CSGIL_ZC" # main signal
oths = list(set(sigs) - set([ms])) # other signals
targ = "FXvEQXR"
cidx = msm.common_cids(dfx, sigs + [targ])
cidx = list(set(cidx_fxeq) & set(cidx))
dict_fxeq = {
"sig": ms,
"rivs": oths,
"targ": targ,
"cidx": cidx,
"black": fxblack,
"srr": None,
"pnls": None,
}
dix = dict_fxeq
cidx = dix["cidx"]
cidx.sort()
print(len(cidx))
", ".join(cidx)
17
'AUD, BRL, CAD, CHF, EUR, GBP, JPY, KRW, MXN, MYR, PLN, SEK, SGD, THB, TRY, TWD, ZAR'
dix = dict_fxeq
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q", # quarterly frequency allows for policy inertia
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[None, None],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
xlab="Cyclical strength composite score, end of quarter",
ylab="FX forward versus equity future return next quarter (both 10% vol target)",
title="Cyclical strength and subsequent FX versus equity returns, 2000-2023 (Apr)",
size=(10, 6),
prob_est="map",
)
Accuracy and correlation check #
dix = dict_fxeq
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
srr = mss.SignalReturnRelations(
dfx,
cids=cidx,
sigs=[sig] + rivs,
rets=targ,
freqs="M",
start="2000-01-01",
blacklist=blax,
)
dix["srr"] = srr
dix = dict_fxeq
srrx = dix["srr"]
display(srrx.summary_table().astype("float").round(3))
| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
M: CSGIL_ZC/last => FXvEQXR | 0.520 | 0.519 | 0.484 | 0.457 | 0.476 | 0.562 | 0.045 | 0.005 | 0.027 | 0.012 | 0.519 |
Mean years | 0.516 | 0.512 | 0.500 | 0.456 | 0.466 | 0.559 | 0.049 | 0.381 | 0.020 | 0.408 | 0.510 |
Positive ratio | 0.708 | 0.500 | 0.417 | 0.250 | 0.292 | 0.750 | 0.667 | 0.417 | 0.500 | 0.375 | 0.500 |
Mean cids | 0.522 | 0.516 | 0.486 | 0.459 | 0.474 | 0.557 | 0.042 | 0.508 | 0.023 | 0.516 | 0.515 |
Positive ratio | 0.812 | 0.688 | 0.500 | 0.125 | 0.125 | 0.750 | 0.750 | 0.500 | 0.812 | 0.312 | 0.688 |
dix = dict_fxeq
srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
| Return | Signal | Frequency | Aggregation | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FXvEQXR | CSGIL_ZC | M | last | 0.520 | 0.519 | 0.483 | 0.457 | 0.476 | 0.562 | 0.045 | 0.005 | 0.027 | 0.012 | 0.519 |
| | CSGI_ZC | M | last | 0.516 | 0.512 | 0.442 | 0.457 | 0.470 | 0.554 | 0.033 | 0.042 | 0.020 | 0.068 | 0.512 |
| | CSGL_ZC | M | last | 0.498 | 0.501 | 0.542 | 0.457 | 0.458 | 0.545 | 0.027 | 0.095 | 0.014 | 0.187 | 0.501 |
| | CSG_ZC | M | last | 0.501 | 0.497 | 0.463 | 0.457 | 0.454 | 0.541 | 0.007 | 0.669 | -0.003 | 0.753 | 0.497 |
| | CSIL_ZC | M | last | 0.518 | 0.517 | 0.489 | 0.457 | 0.474 | 0.560 | 0.065 | 0.000 | 0.040 | 0.000 | 0.517 |
| | CSI_ZC | M | last | 0.524 | 0.519 | 0.443 | 0.457 | 0.478 | 0.560 | 0.048 | 0.003 | 0.032 | 0.003 | 0.519 |
| | CSL_ZC | M | last | 0.508 | 0.516 | 0.592 | 0.456 | 0.469 | 0.563 | 0.048 | 0.003 | 0.033 | 0.002 | 0.516 |
dix = dict_fxeq
srrx = dix["srr"]
srrx.accuracy_bars(
type="years",
    title="Accuracy of monthly predictions of FX forward versus equity index future returns",
size=(14, 6),
)
Naive PnL #
dix = dict_fxeq
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
naive_pnl = msn.NaivePnL(
dfx,
ret=targ,
sigs=sigx,
cids=cidx,
start="2000-01-01",
blacklist=blax,
bms=["USD_EQXR_NSA"],
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="zn_score_pan",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_PZN",
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="binary",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_BIN",
)
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
dix = dict_fxeq
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + x for x in ["_PZN", "_BIN"]] + ["Long only"]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxeq
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + "_PZN"] + ["Long only"]
dict_labels={"CSGIL_ZC_PZN":"based on directional cyclical strength z-score",
"Long only": "always long FX versus equity"}
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title="FX forward versus equity index future PnL across 17 currency areas, outright signal",
xcat_labels=dict_labels,
ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
figsize=(16, 8),
)
dix = dict_fxeq
sigx = dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxeq
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + type for sig in sigx for type in ["_PZN", "_BIN"]]
df_eval = naive_pnl.evaluate_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
)
display(df_eval.transpose())
| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | USD_EQXR_NSA correl | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
CSGIL_ZC_BIN | 5.283309 | 10.0 | 0.528331 | 0.767269 | -12.293703 | -19.034662 | 0.0007 | 280 |
CSGIL_ZC_PZN | 5.367966 | 10.0 | 0.536797 | 0.79032 | -15.336726 | -17.452163 | 0.033073 | 280 |
CSGI_ZC_BIN | 3.885786 | 10.0 | 0.388579 | 0.552558 | -14.941223 | -19.292765 | 0.065014 | 280 |
CSGI_ZC_PZN | 4.818799 | 10.0 | 0.48188 | 0.695431 | -15.103592 | -19.863655 | 0.11016 | 280 |
CSGL_ZC_BIN | 1.096381 | 10.0 | 0.109638 | 0.158254 | -12.531497 | -25.665593 | -0.010587 | 280 |
CSGL_ZC_PZN | 3.313047 | 10.0 | 0.331305 | 0.491697 | -16.412582 | -18.958126 | 0.03326 | 280 |
CSG_ZC_BIN | 0.550684 | 10.0 | 0.055068 | 0.078034 | -17.463976 | -18.434326 | 0.122043 | 280 |
CSG_ZC_PZN | 1.827231 | 10.0 | 0.182723 | 0.267927 | -16.019184 | -22.533842 | 0.151164 | 280 |
CSIL_ZC_BIN | 5.360158 | 10.0 | 0.536016 | 0.777836 | -14.117467 | -17.202702 | -0.028331 | 280 |
CSIL_ZC_PZN | 6.181478 | 10.0 | 0.618148 | 0.89962 | -17.724494 | -20.125442 | -0.051787 | 280 |
CSI_ZC_BIN | 5.6287 | 10.0 | 0.56287 | 0.789878 | -13.539157 | -14.353216 | 0.071348 | 280 |
CSI_ZC_PZN | 4.9923 | 10.0 | 0.49923 | 0.70495 | -18.399523 | -19.66319 | 0.026837 | 280 |
CSL_ZC_BIN | 2.409914 | 10.0 | 0.240991 | 0.351002 | -13.386701 | -19.341517 | -0.178899 | 280 |
CSL_ZC_PZN | 4.244756 | 10.0 | 0.424476 | 0.628255 | -11.485402 | -16.487352 | -0.137296 | 280 |
dix = dict_fxeq
sig = dix["sig"]
naive_pnl = dix["pnls"]
naive_pnl.signal_heatmap(
pnl_name=sig + "_PZN", freq="m", start="2000-01-01", figsize=(16, 8)
)
FX versus equity strategy (relative features) #
Specs and panel test #
sigs = cs_rel
ms = "CSGILvBM_ZC" # main signal
oths = list(set(sigs) - set([ms])) # other signals
targ = "FXvEQXR"
cidx = msm.common_cids(dfx, sigs + [targ])
cidx = list(set(cidx_fxeq) & set(cidx))
dict_fxeq_rf = {
"sig": ms,
"rivs": oths,
"targ": targ,
"cidx": cidx,
"black": fxblack,
"srr": None,
"pnls": None,
}
dix = dict_fxeq_rf
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q", # quarterly frequency allows for policy inertia
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[None, None],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
# separator=2011,
# xlab="",
# ylab="",
# title="",
size=(10, 6),
prob_est="map",
)
Accuracy and correlation check #
dix = dict_fxeq_rf
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
srr = mss.SignalReturnRelations(
dfx,
cids=cidx,
sigs=[sig] + rivs,
rets=targ,
freqs="M",
start="2000-01-01",
blacklist=blax,
)
dix["srr"] = srr
dix = dict_fxeq_rf
srrx = dix["srr"]
display(srrx.summary_table().astype("float").round(3))
| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
M: CSGILvBM_ZC/last => FXvEQXR | 0.524 | 0.517 | 0.410 | 0.457 | 0.476 | 0.557 | 0.080 | 0.000 | 0.042 | 0.000 | 0.516 |
Mean years | 0.523 | 0.506 | 0.422 | 0.456 | 0.463 | 0.548 | 0.049 | 0.398 | 0.019 | 0.481 | 0.506 |
Positive ratio | 0.625 | 0.583 | 0.417 | 0.250 | 0.333 | 0.750 | 0.667 | 0.417 | 0.417 | 0.375 | 0.583 |
Mean cids | 0.522 | 0.509 | 0.421 | 0.459 | 0.469 | 0.550 | 0.058 | 0.345 | 0.029 | 0.400 | 0.509 |
Positive ratio | 0.733 | 0.667 | 0.267 | 0.133 | 0.200 | 0.867 | 0.733 | 0.467 | 0.667 | 0.467 | 0.667 |
dix = dict_fxeq_rf
srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
| Return | Signal | Frequency | Aggregation | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FXvEQXR | CSGILvBM_ZC | M | last | 0.524 | 0.517 | 0.410 | 0.457 | 0.476 | 0.557 | 0.080 | 0.000 | 0.042 | 0.000 | 0.516 |
| | CSGIvBM_ZC | M | last | 0.507 | 0.502 | 0.446 | 0.456 | 0.458 | 0.546 | 0.059 | 0.000 | 0.026 | 0.019 | 0.502 |
| | CSGLvBM_ZC | M | last | 0.520 | 0.514 | 0.437 | 0.457 | 0.473 | 0.556 | 0.063 | 0.000 | 0.040 | 0.000 | 0.514 |
| | CSGvBM_ZC | M | last | 0.505 | 0.502 | 0.468 | 0.456 | 0.459 | 0.546 | 0.030 | 0.071 | 0.013 | 0.244 | 0.502 |
| | CSILvBM_ZC | M | last | 0.523 | 0.517 | 0.427 | 0.457 | 0.476 | 0.558 | 0.084 | 0.000 | 0.050 | 0.000 | 0.517 |
| | CSIvBM_ZC | M | last | 0.517 | 0.512 | 0.441 | 0.454 | 0.467 | 0.557 | 0.054 | 0.002 | 0.033 | 0.004 | 0.512 |
| | CSLvBM_ZC | M | last | 0.532 | 0.527 | 0.440 | 0.456 | 0.486 | 0.568 | 0.068 | 0.000 | 0.045 | 0.000 | 0.526 |
dix = dict_fxeq_rf
srrx = dix["srr"]
srrx.accuracy_bars(
type="years",
    title="Accuracy of monthly predictions of FX forward versus equity index future returns",
size=(14, 6),
)
Naive PnL #
dix = dict_fxeq_rf
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
naive_pnl = msn.NaivePnL(
dfx,
ret=targ,
sigs=sigx,
cids=cidx,
start="2000-01-01",
blacklist=blax,
bms=["USD_EQXR_NSA"],
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="zn_score_pan",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_PZN",
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="binary",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_BIN",
)
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
dix = dict_fxeq_rf
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + x for x in ["_PZN", "_BIN"]] + ["Long only"]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxeq_rf
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + "_PZN"] + ["Long only"]
dict_labels={"CSGILvBM_ZC_PZN": "based on relative cyclical strength z-score",
"Long only": "always long FX versus equity"}
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title="FX forward versus equity index future PnL across 17 currency areas, relative signal",
xcat_labels=dict_labels,
ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
figsize=(16, 8),
)
dix = dict_fxeq_rf
sigx = dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxeq_rf
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + type for sig in sigx for type in ["_PZN", "_BIN"]]
df_eval = naive_pnl.evaluate_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
)
display(df_eval.transpose())
| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | USD_EQXR_NSA correl | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
CSGILvBM_ZC_BIN | 6.988237 | 10.0 | 0.698824 | 1.002847 | -14.192158 | -17.205187 | 0.069719 | 280 |
CSGILvBM_ZC_PZN | 8.463854 | 10.0 | 0.846385 | 1.219772 | -13.046622 | -14.924857 | 0.051074 | 280 |
CSGIvBM_ZC_BIN | 2.482503 | 10.0 | 0.24825 | 0.348725 | -12.776175 | -21.217493 | 0.061277 | 280 |
CSGIvBM_ZC_PZN | 6.452313 | 10.0 | 0.645231 | 0.931053 | -9.994317 | -20.118814 | 0.086924 | 280 |
CSGLvBM_ZC_BIN | 6.663084 | 10.0 | 0.666308 | 0.954057 | -13.767228 | -22.017301 | 0.008182 | 280 |
CSGLvBM_ZC_PZN | 7.295042 | 10.0 | 0.729504 | 1.061778 | -12.284544 | -22.722754 | 0.001844 | 280 |
CSGvBM_ZC_BIN | 4.799983 | 10.0 | 0.479998 | 0.701991 | -12.082934 | -20.711447 | -0.013233 | 280 |
CSGvBM_ZC_PZN | 3.648122 | 10.0 | 0.364812 | 0.523346 | -12.610609 | -28.692497 | 0.023614 | 280 |
CSILvBM_ZC_BIN | 6.642967 | 10.0 | 0.664297 | 0.943181 | -14.345332 | -17.18584 | 0.085824 | 280 |
CSILvBM_ZC_PZN | 8.736828 | 10.0 | 0.873683 | 1.26107 | -14.984349 | -11.526428 | 0.043571 | 280 |
CSIvBM_ZC_BIN | 3.924273 | 10.0 | 0.392427 | 0.543107 | -16.078086 | -30.639351 | 0.098115 | 280 |
CSIvBM_ZC_PZN | 6.185239 | 10.0 | 0.618524 | 0.885723 | -13.543817 | -25.921767 | 0.083992 | 280 |
CSLvBM_ZC_BIN | 8.042552 | 10.0 | 0.804255 | 1.166361 | -14.077094 | -13.209141 | 0.033006 | 280 |
CSLvBM_ZC_PZN | 7.212787 | 10.0 | 0.721279 | 1.052855 | -14.096004 | -18.834207 | -0.023676 | 280 |
dix = dict_fxeq_rf
sig = dix["sig"]
naive_pnl = dix["pnls"]
naive_pnl.signal_heatmap(
pnl_name=sig + "_PZN", freq="m", start="2000-01-01", figsize=(16, 5)
)
FX versus IRS strategy (relative features) #
Specs and panel test #
sigs = cs_rel
ms = "CSGILvBM_ZC" # main signal
oths = list(set(sigs) - set([ms])) # other signals
targ = "FXvDU05XR"
cidx = msm.common_cids(dfx, sigs + [targ])
cidx = list(set(cidx_fxdu) & set(cidx))
dict_fxdu_rf = {
"sig": ms,
"rivs": oths,
"targ": targ,
"cidx": cidx,
"black": fxblack,
"srr": None,
"pnls": None,
}
dix = dict_fxdu_rf
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q", # quarterly frequency allows for policy inertia
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[None, None],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
xlab="Cyclical strength composite score versus benchmark currency area, end of quarter",
ylab="FX forward return versus 5-year IRS return, volatility neutral, next quarter",
title="Relative cyclical strength and subsequent FX versus IRS returns, 2000-2023 (Apr)",
size=(10, 6),
prob_est="map",
)
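The quarterly alignment behind `CategoryRelations` above can be sketched on synthetic data: the signal is sampled at the end of each quarter (`xcat_aggs[0]="last"`) and paired with the *following* quarter's cumulative return (`xcat_aggs[1]="sum"`, `lag=1`). The names and data below are illustrative only, not part of the `macrosynergy` API.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2020-01-01", "2021-12-31")
signal = pd.Series(rng.normal(size=len(dates)), index=dates)
returns = pd.Series(rng.normal(scale=0.5, size=len(dates)), index=dates)

quarters = dates.to_period("Q")
sig_q = signal.groupby(quarters).last()            # end-of-quarter signal ("last")
ret_q = returns.groupby(quarters).sum().shift(-1)  # next quarter's summed return (lag=1)

aligned = pd.concat([sig_q, ret_q], axis=1, keys=["sig", "ret"]).dropna()
corr = aligned["sig"].corr(aligned["ret"])         # Pearson coefficient, as in reg_scatter
```

The last quarter drops out because it has no subsequent return, mirroring how a lagged panel regression loses the final observation per cross-section.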
Accuracy and correlation check #
dix = dict_fxdu_rf
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
srr = mss.SignalReturnRelations(
dfx,
cids=cidx,
sigs=[sig] + rivs,
rets=targ,
freqs="M",
start="2000-01-01",
blacklist=blax,
)
dix["srr"] = srr
dix = dict_fxdu_rf
srrx = dix["srr"]
display(srrx.summary_table().astype("float").round(3))
| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
|---|---|---|---|---|---|---|---|---|---|---|---|
| M: CSGILvBM_ZC/last => FXvDU05XR | 0.522 | 0.522 | 0.440 | 0.501 | 0.526 | 0.518 | 0.052 | 0.000 | 0.031 | 0.001 | 0.522 |
| Mean years | 0.522 | 0.519 | 0.441 | 0.492 | 0.506 | 0.532 | 0.060 | 0.432 | 0.037 | 0.405 | 0.517 |
| Positive ratio | 0.542 | 0.708 | 0.458 | 0.417 | 0.458 | 0.625 | 0.792 | 0.500 | 0.625 | 0.458 | 0.708 |
| Mean cids | 0.522 | 0.521 | 0.445 | 0.503 | 0.527 | 0.515 | 0.031 | 0.439 | 0.025 | 0.401 | 0.519 |
| Positive ratio | 0.682 | 0.727 | 0.364 | 0.636 | 0.682 | 0.591 | 0.545 | 0.409 | 0.636 | 0.409 | 0.727 |
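The core accuracy columns can be illustrated on synthetic data: accuracy is the share of periods in which the signal's sign matches the sign of the subsequent return, while balanced accuracy averages the hit rates of positive and negative signals so that a long bias cannot inflate the score. This is a sketch of the concepts, not `SignalReturnRelations`' exact computation.

```python
import numpy as np

rng = np.random.default_rng(2)
sig = rng.normal(size=1000)
ret = 0.5 * sig + rng.normal(size=1000)  # built-in positive signal-return relation

hits = np.sign(sig) == np.sign(ret)
accuracy = hits.mean()
acc_pos = hits[sig > 0].mean()           # hit rate of positive signals (cf. pos_prec)
acc_neg = hits[sig < 0].mean()           # hit rate of negative signals (cf. neg_prec)
bal_accuracy = (acc_pos + acc_neg) / 2
```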
dix = dict_fxdu_rf
srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
| Return | Signal | Frequency | Aggregation | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FXvDU05XR | CSGILvBM_ZC | M | last | 0.522 | 0.522 | 0.440 | 0.501 | 0.526 | 0.518 | 0.052 | 0.000 | 0.031 | 0.001 | 0.522 |
| | CSGIvBM_ZC | M | last | 0.506 | 0.506 | 0.464 | 0.502 | 0.509 | 0.504 | 0.044 | 0.002 | 0.021 | 0.028 | 0.506 |
| | CSGLvBM_ZC | M | last | 0.517 | 0.517 | 0.466 | 0.501 | 0.519 | 0.515 | 0.039 | 0.006 | 0.025 | 0.009 | 0.517 |
| | CSGvBM_ZC | M | last | 0.506 | 0.506 | 0.472 | 0.502 | 0.509 | 0.503 | 0.023 | 0.113 | 0.008 | 0.394 | 0.506 |
| | CSILvBM_ZC | M | last | 0.518 | 0.518 | 0.450 | 0.499 | 0.519 | 0.517 | 0.053 | 0.000 | 0.035 | 0.000 | 0.518 |
| | CSIvBM_ZC | M | last | 0.509 | 0.509 | 0.458 | 0.500 | 0.510 | 0.508 | 0.033 | 0.021 | 0.023 | 0.017 | 0.509 |
| | CSLvBM_ZC | M | last | 0.521 | 0.521 | 0.446 | 0.498 | 0.522 | 0.520 | 0.041 | 0.004 | 0.028 | 0.003 | 0.521 |
dix = dict_fxdu_rf
srrx = dix["srr"]
srrx.accuracy_bars(
type="years",
# title="Accuracy of monthly predictions of FX forward returns for 26 EM and DM currencies",
size=(14, 6),
)
Naive PnL #
dix = dict_fxdu_rf
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
naive_pnl = msn.NaivePnL(
dfx,
ret=targ,
sigs=sigx,
cids=cidx,
start="2000-01-01",
blacklist=blax,
bms=["USD_EQXR_NSA"],
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="zn_score_pan",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_PZN",
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="binary",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_BIN",
)
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
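The mechanics behind a "_PZN" PnL above can be sketched on synthetic data: z-score the signal, winsorize at 3 standard deviations (`thresh=3`), apply a one-day rebalancing slip, and scale the resulting PnL to 10% annualized volatility (`vol_scale=10`). This illustrates the logic only; `NaivePnL` itself z-scores on a panel basis and rebalances monthly, which this sketch does not replicate.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.bdate_range("2020-01-01", "2021-12-31")
sig = pd.Series(rng.normal(size=len(dates)), index=dates)
ret = pd.Series(rng.normal(scale=0.5, size=len(dates)), index=dates)

zn = ((sig - sig.mean()) / sig.std()).clip(-3, 3)  # z-score, winsorized at 3 SDs
pos = zn.shift(1)                                  # rebal_slip=1: positions act a day late
pnl = (pos * ret).dropna()
pnl = pnl * 10 / (pnl.std() * np.sqrt(252))        # ex-post scaling to 10% annualized vol
```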
dix = dict_fxdu_rf
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + x for x in ["_PZN", "_BIN"]] + ["Long only"]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxdu_rf
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + "_PZN"] + ["Long only"]
dict_labels = {
    "CSGILvBM_ZC_PZN": "based on cyclical strength z-score",
    "Long only": "always long FX forward and paying 5-year IRS yields",
}
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title="FX versus duration PnL across 23 markets",
xcat_labels=dict_labels,
ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
figsize=(16, 8),
)
dix = dict_fxdu_rf
sigx = dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_fxdu_rf
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + suffix for sig in sigx for suffix in ["_PZN", "_BIN"]]
df_eval = naive_pnl.evaluate_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
)
display(df_eval.transpose())
| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | USD_EQXR_NSA correl | Traded Months |
|---|---|---|---|---|---|---|---|---|
| CSGILvBM_ZC_BIN | 6.801462 | 10.0 | 0.680146 | 1.010541 | -15.749766 | -38.918015 | 0.035471 | 280 |
| CSGILvBM_ZC_PZN | 5.621181 | 10.0 | 0.562118 | 0.831619 | -22.648341 | -57.545365 | 0.061404 | 280 |
| CSGIvBM_ZC_BIN | 2.097494 | 10.0 | 0.209749 | 0.296659 | -16.227387 | -38.787377 | 0.027086 | 280 |
| CSGIvBM_ZC_PZN | 4.936312 | 10.0 | 0.493631 | 0.725186 | -23.985285 | -53.145022 | 0.025661 | 280 |
| CSGLvBM_ZC_BIN | 4.167141 | 10.0 | 0.416714 | 0.611648 | -23.261691 | -41.071327 | 0.048548 | 280 |
| CSGLvBM_ZC_PZN | 4.411676 | 10.0 | 0.441168 | 0.657154 | -20.769511 | -44.757449 | 0.04942 | 280 |
| CSGvBM_ZC_BIN | 3.208237 | 10.0 | 0.320824 | 0.469048 | -12.445675 | -20.095153 | -0.022655 | 280 |
| CSGvBM_ZC_PZN | 2.99718 | 10.0 | 0.299718 | 0.441816 | -19.031596 | -35.674721 | -0.015141 | 280 |
| CSILvBM_ZC_BIN | 5.000921 | 10.0 | 0.500092 | 0.725556 | -18.948527 | -40.133046 | 0.063345 | 280 |
| CSILvBM_ZC_PZN | 5.432433 | 10.0 | 0.543243 | 0.785428 | -22.12334 | -55.306638 | 0.083733 | 280 |
| CSIvBM_ZC_BIN | 2.049888 | 10.0 | 0.204989 | 0.288298 | -18.991353 | -33.279101 | 0.028674 | 280 |
| CSIvBM_ZC_PZN | 4.019545 | 10.0 | 0.401955 | 0.575009 | -20.674494 | -48.118809 | 0.04901 | 280 |
| CSLvBM_ZC_BIN | 5.15556 | 10.0 | 0.515556 | 0.754656 | -17.697169 | -44.268446 | 0.051962 | 280 |
| CSLvBM_ZC_PZN | 4.133127 | 10.0 | 0.413313 | 0.589554 | -23.678345 | -57.415446 | 0.089347 | 280 |
dix = dict_fxdu_rf
sig = dix["sig"]
naive_pnl = dix["pnls"]
naive_pnl.signal_heatmap(
pnl_name=sig + "_PZN", freq="m", start="2000-01-01", figsize=(16, 6)
)
IRS curve flattening strategy #
Specs and panel test #
sigs = cs_dir
ms = "CSGIL_ZC" # main signal
oths = list(set(sigs) - set([ms])) # other signals
targ = "DU05v02XR"
cidx = msm.common_cids(dfx, sigs + [targ])
cidx = list(set(cidx_du52) & set(cidx))
dict_du52 = {
"sig": ms,
"rivs": oths,
"targ": targ,
"cidx": cidx,
"black": dublack,
"srr": None,
"pnls": None,
}
dix = dict_du52
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
crx = msp.CategoryRelations(
dfx,
xcats=[sig, targ],
cids=cidx,
freq="Q",
lag=1,
xcat_aggs=["last", "sum"],
start="2000-01-01",
blacklist=blax,
xcat_trims=[None, None],
)
crx.reg_scatter(
labels=False,
coef_box="lower left",
xlab="Cyclical strength composite score, end of quarter",
ylab="IRS curve 2s-5s flattening return next quarter",
title="Cyclical strength and subsequent IRS flattening returns, 2000-2023 (Apr)",
size=(10, 6),
prob_est="map",
)
Accuracy and correlation check #
dix = dict_du52
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
srr = mss.SignalReturnRelations(
dfx,
cids=cidx,
sigs=[sig] + rivs,
rets=targ,
freqs="M",
start="2000-01-01",
blacklist=blax,
)
dix["srr"] = srr
dix = dict_du52
srrx = dix["srr"]
display(srrx.summary_table().astype("float").round(3))
| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
|---|---|---|---|---|---|---|---|---|---|---|---|
| M: CSGIL_ZC/last => DU05v02XR | 0.537 | 0.537 | 0.496 | 0.528 | 0.565 | 0.508 | 0.097 | 0.000 | 0.056 | 0.000 | 0.537 |
| Mean years | 0.537 | 0.517 | 0.513 | 0.531 | 0.544 | 0.490 | 0.022 | 0.365 | 0.022 | 0.413 | 0.515 |
| Positive ratio | 0.750 | 0.625 | 0.542 | 0.667 | 0.750 | 0.458 | 0.625 | 0.417 | 0.542 | 0.417 | 0.625 |
| Mean cids | 0.536 | 0.540 | 0.498 | 0.526 | 0.563 | 0.517 | 0.107 | 0.264 | 0.063 | 0.290 | 0.539 |
| Positive ratio | 0.880 | 0.880 | 0.560 | 0.760 | 0.920 | 0.560 | 0.920 | 0.800 | 0.920 | 0.720 | 0.880 |
dix = dict_du52
srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
| Return | Signal | Frequency | Aggregation | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval | auc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DU05v02XR | CSGIL_ZC | M | last | 0.537 | 0.537 | 0.496 | 0.528 | 0.565 | 0.508 | 0.097 | 0.000 | 0.056 | 0.000 | 0.537 |
| | CSGI_ZC | M | last | 0.536 | 0.540 | 0.436 | 0.528 | 0.573 | 0.507 | 0.095 | 0.000 | 0.062 | 0.000 | 0.540 |
| | CSGL_ZC | M | last | 0.535 | 0.533 | 0.536 | 0.528 | 0.559 | 0.507 | 0.099 | 0.000 | 0.056 | 0.000 | 0.533 |
| | CSG_ZC | M | last | 0.524 | 0.527 | 0.462 | 0.528 | 0.556 | 0.497 | 0.096 | 0.000 | 0.060 | 0.000 | 0.527 |
| | CSIL_ZC | M | last | 0.524 | 0.524 | 0.506 | 0.529 | 0.552 | 0.495 | 0.062 | 0.000 | 0.033 | 0.000 | 0.524 |
| | CSI_ZC | M | last | 0.515 | 0.519 | 0.451 | 0.530 | 0.550 | 0.487 | 0.039 | 0.003 | 0.025 | 0.004 | 0.518 |
| | CSL_ZC | M | last | 0.534 | 0.528 | 0.617 | 0.529 | 0.551 | 0.506 | 0.064 | 0.000 | 0.033 | 0.000 | 0.527 |
dix = dict_du52
srrx = dix["srr"]
srrx.accuracy_bars(
type="years",
# title="",
size=(14, 6),
)
Naive PnL #
dix = dict_du52
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]
blax = dix["black"]
naive_pnl = msn.NaivePnL(
dfx,
ret=targ,
sigs=sigx,
cids=cidx,
start="2000-01-01",
blacklist=blax,
bms=["USD_EQXR_NSA", "USD_DU05YXR_VT10"],
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="zn_score_pan",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_PZN",
)
for sig in sigx:
naive_pnl.make_pnl(
sig,
sig_neg=False,
sig_op="binary",
thresh=3,
rebal_freq="monthly",
vol_scale=10,
rebal_slip=1,
pnl_name=sig + "_BIN",
)
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
dix = dict_du52
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + x for x in ["_PZN", "_BIN"]] + ["Long only"]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_du52
sigx = dix["sig"]
naive_pnl = dix["pnls"]
pnls = [sigx + "_PZN"] + ["Long only"]
dict_labels = {
    "CSGIL_ZC_PZN": "based on cyclical strength z-score",
    "Long only": "always long 5-year versus 2-year, volatility neutral",
}
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title="IRS curve flattening PnL across 25 markets",
xcat_labels=dict_labels,
ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
figsize=(16, 8),
)
dix = dict_du52
sigx = dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx]
naive_pnl.plot_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
title=None,
xcat_labels=None,
figsize=(16, 8),
)
dix = dict_du52
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]
pnls = [sig + suffix for sig in sigx for suffix in ["_PZN", "_BIN"]]
df_eval = naive_pnl.evaluate_pnls(
pnl_cats=pnls,
pnl_cids=["ALL"],
start="2000-01-01",
)
display(df_eval.transpose())
| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | USD_EQXR_NSA correl | USD_DU05YXR_VT10 correl | Traded Months |
|---|---|---|---|---|---|---|---|---|---|
| CSGIL_ZC_BIN | 8.609747 | 10.0 | 0.860975 | 1.358165 | -13.277343 | -22.409924 | 0.007188 | -0.069903 | 280 |
| CSGIL_ZC_PZN | 9.768589 | 10.0 | 0.976859 | 1.685033 | -12.188346 | -16.452247 | 0.011289 | -0.073394 | 280 |
| CSGI_ZC_BIN | 9.145348 | 10.0 | 0.914535 | 1.448942 | -13.084411 | -20.10358 | 0.00292 | -0.129063 | 280 |
| CSGI_ZC_PZN | 9.035923 | 10.0 | 0.903592 | 1.532068 | -14.12981 | -15.165435 | 0.016119 | -0.107721 | 280 |
| CSGL_ZC_BIN | 8.880915 | 10.0 | 0.888092 | 1.344883 | -20.627857 | -20.134156 | 0.040401 | 0.031881 | 280 |
| CSGL_ZC_PZN | 10.271108 | 10.0 | 1.027111 | 1.76573 | -14.635985 | -14.896181 | 0.010697 | -0.020946 | 280 |
| CSG_ZC_BIN | 7.556902 | 10.0 | 0.75569 | 1.175785 | -18.766272 | -16.532623 | 0.013415 | -0.017466 | 280 |
| CSG_ZC_PZN | 9.960895 | 10.0 | 0.99609 | 1.705182 | -17.748151 | -13.122323 | 0.017344 | -0.053538 | 280 |
| CSIL_ZC_BIN | 6.149633 | 10.0 | 0.614963 | 0.954377 | -16.741566 | -31.757231 | 0.001303 | -0.103185 | 280 |
| CSIL_ZC_PZN | 7.516372 | 10.0 | 0.751637 | 1.22116 | -16.982597 | -30.459333 | 0.003466 | -0.081338 | 280 |
| CSI_ZC_BIN | 4.21796 | 10.0 | 0.421796 | 0.636606 | -16.967601 | -32.913727 | 0.012361 | -0.163752 | 280 |
| CSI_ZC_PZN | 4.448028 | 10.0 | 0.444803 | 0.681406 | -18.468483 | -35.31808 | 0.007218 | -0.126911 | 280 |
| CSL_ZC_BIN | 8.036301 | 10.0 | 0.80363 | 1.223383 | -15.237968 | -20.420931 | -0.008401 | 0.066593 | 280 |
| CSL_ZC_PZN | 8.786447 | 10.0 | 0.878645 | 1.427156 | -17.92247 | -21.213138 | -0.004346 | 0.031581 | 280 |
dix = dict_du52
sig = dix["sig"]
naive_pnl = dix["pnls"]
naive_pnl.signal_heatmap(
pnl_name=sig + "_PZN", freq="m", start="2000-01-01", figsize=(16, 6)
)