Trading strategies with JPMaQS #

In this research notebook, we explore how JPMaQS data can be used to construct and test trading strategies in conjunction with the macrosynergy package. The notebook demonstrates how to create a simple macro strategy based on customized indicators, such as excess inflation or excess credit growth, how to backtest it, and how to evaluate its profitability. Throughout the notebook, we rely extensively on the macrosynergy package, particularly for preliminary data analysis, signal evaluation, and (naive) profit and loss (PnL) calculations. This notebook is the first in a series dedicated to trading strategies that seek to profit from simple macroeconomic theory applied through quantamental indicators.

The notebook covers three main parts:

  • Get Packages and JPMaQS Data: This section is responsible for installing and importing the necessary Python packages that are used throughout the analysis.

  • Transformations and Checks: In this part, the notebook performs calculations and transformations on the data to derive the relevant signals and targets used for the analysis, including the normalization of feature variables using z-score or building a simple linear composite indicator.

  • Value Checks: This is the most critical section, where the notebook calculates and implements the trading strategies based on plausible hypotheses. It involves backtesting a few simple but powerful trading strategies. In particular, the notebook investigates how quantamental indicators, such as excess growth, inflation, and credit growth, can help generate trading signals for financial assets. It looks into 4 simple macro strategies targeting subsequent monthly (or quarterly) vol-targeted 2- and 5-year interest rate swap receiver returns:

    • G2 directional macro trend : analysis of predictive power of macro trend pressure on subsequent interest rate swap receiver returns (on vol-targeted positions) in the two large currency areas - USD and EUR

    • non-G2 directional macro trend : simple directional macro trend pressure and subsequent IRS receiver returns (on vol-targeted positions) in other currency areas

    • non-G2 relative to G2 macro trend : predictive power of macro trends of the smaller countries relative to the G2 on vol-targeted IRS returns relative to similar returns in the G2

    • global relative pressure factors : checks the value of relative macro trends for each country versus an average of all other (available and tradable) countries on vol-targeted IRS returns relative to all available countries.

Countless other indicators and approaches can be explored, even with the limited free dataset. Users can modify the code to test different hypotheses and strategies based on their own research and ideas. Best of luck with your research!

1. Get packages and JPMaQS data #

This notebook primarily relies on the standard packages available in the Python data science stack. However, there is an additional package macrosynergy that is required for two purposes:

  • Downloading JPMaQS data: The macrosynergy package facilitates the retrieval of JPMaQS data, which is used in the notebook.

  • For the analysis of quantamental data and value propositions: The macrosynergy package provides functionality for performing quick analyses of quantamental data and exploring value propositions.

For detailed information and a comprehensive understanding of the macrosynergy package and its functionalities, please refer to the “Introduction to Macrosynergy package” notebook on the Macrosynergy Quantamental Academy or visit the following link on Kaggle .

# Uncomment below if running on Kaggle
"""
%%capture
! pip install macrosynergy --upgrade"""

1.1. Import packages and download JPMaQS #

import numpy as np
import pandas as pd
import os

import macrosynergy.management as msm
import macrosynergy.panel as msp
import macrosynergy.signal as mss
import macrosynergy.pnl as msn
import macrosynergy.visuals as msv

from macrosynergy.download import JPMaQSDownload

import warnings

warnings.simplefilter("ignore")

The JPMaQS indicators we consider are downloaded using the J.P. Morgan Dataquery API interface within the macrosynergy package. This is done by specifying ticker strings, formed by appending an indicator category code <category> to a currency area code <cross_section>. These constitute the main part of a full quantamental indicator ticker, taking the form DB(JPMAQS,<cross_section>_<category>,<info>) , where <info> denotes the time series of information for the given cross-section and category. For example, the latest values of U.S. headline CPI inflation would be requested as DB(JPMAQS,USD_CPIH_SA_P1M1ML12,value). The following types of information are available:

  • value giving the latest available values for the indicator,

  • eop_lag referring to days elapsed since the end of the observation period,

  • mop_lag referring to the number of days elapsed since the mean observation period,

  • grade denoting a grade of the observation, giving a metric of real-time information quality.

After instantiating the JPMaQSDownload class within the macrosynergy.download module, one can use the download(tickers,start_date,metrics) method to easily download the necessary data, where tickers is an array of ticker strings, start_date is the first collection date to be considered, and metrics is an array comprising the time series information to be downloaded. For more information see here or use the free dataset on Kaggle .

To ensure reproducibility, only samples between January 2000 (inclusive) and November 2023 (exclusive) are considered.

# Cross-sections of interest

cids_dm = ["AUD", "CAD", "CHF", "EUR", "GBP", "JPY", "NOK", "NZD", "SEK", "USD"]
cids_em = [
    "CLP",
    "COP",
    "CZK",
    "HUF",
    "IDR",
    "ILS",
    "INR",
    "KRW",
    "MXN",
    "PLN",
    "THB",
    "TRY",
    "TWD",
    "ZAR",
]
cids = cids_dm + cids_em
cids_du = cids_dm + cids_em  # cross-sections with duration (IRS) positions
cids_dux = list(set(cids_du) - set(["IDR", "NZD"]))  # duration cross-sections excluding IDR and NZD
cids_xg2 = list(set(cids_dux) - set(["EUR", "USD"]))  # as above, also excluding the G2 (EUR and USD)
# Quantamental categories of interest

ecos = [
    "CPIC_SA_P1M1ML12",
    "CPIC_SJA_P3M3ML3AR",
    "CPIC_SJA_P6M6ML6AR",
    "CPIH_SA_P1M1ML12",
    "CPIH_SJA_P3M3ML3AR",
    "CPIH_SJA_P6M6ML6AR",
    "INFTEFF_NSA",
    "INTRGDP_NSA_P1M1ML12_3MMA",
    "INTRGDPv5Y_NSA_P1M1ML12_3MMA",
    "PCREDITGDP_SJA_D1M1ML12",
    "RGDP_SA_P1Q1QL4_20QMA",
    "PCREDITBN_SJA_P1M1ML12",
]
mkts = [
    "DU02YXR_NSA",
    "DU05YXR_NSA",
    "DU02YXR_VT10",
    "DU05YXR_VT10",
    "FXTARGETED_NSA",
    "FXUNTRADABLE_NSA",
    "EQXR_NSA", # to use as a benchmark
    "GB10YXR_NSA" # to use as a benchmark
]



xcats = ecos + mkts

The description of each JPMaQS category is available either under Macro Quantamental Academy, JPMorgan Markets (password protected), or on Kaggle (just for the tickers used in this notebook). In particular, this notebook uses Consumer price inflation trends, Inflation targets, Intuitive growth estimates, Domestic credit ratios, Long-term GDP growth, Private credit expansion, Duration returns, and FX tradeability and flexibility.

# Download series from J.P. Morgan DataQuery by tickers

tickers = [cid + "_" + xcat for cid in cids for xcat in xcats]
print(f"Maximum number of tickers is {len(tickers)}")

# Retrieve credentials

client_id: str = os.getenv("DQ_CLIENT_ID")
client_secret: str = os.getenv("DQ_CLIENT_SECRET")

proxy = {
    # "https": "https://example.com:port",
}

with JPMaQSDownload(
    client_id=client_id,
    client_secret=client_secret,
    proxy=proxy,
) as dq:
    df = dq.download(
        tickers=tickers,
        start_date="2000-01-01",
        suppress_warning=True,
        metrics=["value"],
        show_progress=True,
    )
Maximum number of tickers is 480
Downloading data from JPMaQS.
Timestamp UTC:  2024-05-16 15:53:48
Connection successful!
Requesting data: 100%|██████████| 24/24 [00:04<00:00,  4.92it/s]
Downloading data: 100%|██████████| 24/24 [00:31<00:00,  1.32s/it]
Some expressions are missing from the downloaded data. Check logger output for complete list.
29 out of 480 expressions are missing. To download the catalogue of all available expressions and filter the unavailable expressions, set `get_catalogue=True` in the call to `JPMaQSDownload.download()`.
Some dates are missing from the downloaded data. 
6361 out of 6361 dates are missing.
#  uncomment if running on Kaggle
"""for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
                                                   
df = pd.read_csv('../input/fixed-income-returns-and-macro-trends/JPMaQS_Quantamental_Indicators.csv', index_col=0, parse_dates=['real_date'])"""
"for dirname, _, filenames in os.walk('/kaggle/input'):\n    for filename in filenames:\n        print(os.path.join(dirname, filename))\n                                                   \ndf = pd.read_csv('../input/fixed-income-returns-and-macro-trends/JPMaQS_Quantamental_Indicators.csv', index_col=0, parse_dates=['real_date'])"

1.2. Availability #

It is essential to assess data availability before conducting any analysis. Doing so helps identify potential gaps or limitations in the dataset, which can affect the validity and reliability of the analysis, ensures that a sufficient number of observations is available for each selected category and cross-section, and helps determine the appropriate periods for the analysis.

The missing_in_df() function in macrosynergy.management allows the user to quickly check whether or not all requested categories have been downloaded.

msm.missing_in_df(df, xcats=xcats, cids=cids)
No missing XCATs across DataFrame.
Missing cids for CPIC_SA_P1M1ML12:              []
Missing cids for CPIC_SJA_P3M3ML3AR:            []
Missing cids for CPIC_SJA_P6M6ML6AR:            []
Missing cids for CPIH_SA_P1M1ML12:              []
Missing cids for CPIH_SJA_P3M3ML3AR:            []
Missing cids for CPIH_SJA_P6M6ML6AR:            []
Missing cids for DU02YXR_NSA:                   []
Missing cids for DU02YXR_VT10:                  []
Missing cids for DU05YXR_NSA:                   []
Missing cids for DU05YXR_VT10:                  []
Missing cids for EQXR_NSA:                      ['CLP', 'COP', 'CZK', 'HUF', 'IDR', 'ILS', 'NOK', 'NZD']
Missing cids for FXTARGETED_NSA:                ['USD']
Missing cids for FXUNTRADABLE_NSA:              ['USD']
Missing cids for GB10YXR_NSA:                   ['CAD', 'CHF', 'CLP', 'COP', 'CZK', 'EUR', 'HUF', 'IDR', 'ILS', 'INR', 'KRW', 'MXN', 'NOK', 'PLN', 'SEK', 'THB', 'TRY', 'TWD', 'ZAR']
Missing cids for INFTEFF_NSA:                   []
Missing cids for INTRGDP_NSA_P1M1ML12_3MMA:     []
Missing cids for INTRGDPv5Y_NSA_P1M1ML12_3MMA:  []
Missing cids for PCREDITBN_SJA_P1M1ML12:        []
Missing cids for PCREDITGDP_SJA_D1M1ML12:       []
Missing cids for RGDP_SA_P1Q1QL4_20QMA:         []

The check_availability() function in macrosynergy.management displays the start dates from which each category is available for each requested country, as well as missing dates or unavailable series.

# indicator availability
plot = msm.check_availability(
    df,
    xcats=ecos,
    cids=cids,
    start_size=(20, 7),
    start_years=True,
    missing_recent=False,
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/45af71386dbc58aff3089a07a4df1fe10bf4f57a37d80889d6e6f5bd32f282d7.png
# return availability
msm.check_availability(df, xcats=mkts, cids=cids, missing_recent=False)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/2dc08f12cde2318e8c740be5f82052c581e50ab749c3487b51d559b2b590e7d6.png

1.3. Exclude series sections with make_blacklist #

Before running the analysis, we use the make_blacklist() helper function from the macrosynergy package, which creates a standardized dictionary of blacklist periods, i.e., periods that affect the validity of an indicator, based on standardized panels of binary categories.

Put simply, this function allows converting category variables into blacklist dictionaries that can then be passed to other functions. Below, we picked two indicators for FX tradability and flexibility. FXTARGETED_NSA is an exchange rate target dummy, which takes a value of 1 if the exchange rate is targeted through a peg or any regime that significantly reduces exchange rate flexibility and 0 otherwise. FXUNTRADABLE_NSA is also a dummy variable that takes the value one if liquidity in the main FX forward market is limited or there is a distortion between tradable offshore and untradable onshore contracts.

dfb = df[df["xcat"].isin(["FXTARGETED_NSA", "FXUNTRADABLE_NSA"])].loc[
    :, ["cid", "xcat", "real_date", "value"]
]
dfba = (
    dfb.groupby(["cid", "real_date"])
    .aggregate(value=pd.NamedAgg(column="value", aggfunc="max"))
    .reset_index()
)
dfba["xcat"] = "FXBLACK"
fxblack = msp.make_blacklist(dfba, "FXBLACK")
fxblack
{'CHF': (Timestamp('2011-10-03 00:00:00'), Timestamp('2015-01-30 00:00:00')),
 'CZK': (Timestamp('2014-01-01 00:00:00'), Timestamp('2017-07-31 00:00:00')),
 'ILS': (Timestamp('2000-01-03 00:00:00'), Timestamp('2005-12-30 00:00:00')),
 'INR': (Timestamp('2000-01-03 00:00:00'), Timestamp('2004-12-31 00:00:00')),
 'THB': (Timestamp('2007-01-01 00:00:00'), Timestamp('2008-11-28 00:00:00')),
 'TRY_1': (Timestamp('2000-01-03 00:00:00'), Timestamp('2003-09-30 00:00:00')),
 'TRY_2': (Timestamp('2020-01-01 00:00:00'), Timestamp('2024-05-15 00:00:00'))}

2. Transformations and checks #

2.1. Features #

2.1.1. Excess growth #

Recent annual GDP growth rate trends versus 5-year medians are used as proxies for strong versus weak growth by local standards and are provided directly by JPMaQS. These excess growth indicators are less elaborate than GDP growth versus estimated potential growth but are more objective and less susceptible to look-ahead biases in model design. They are also a plausible intuitive proxy for how the public and policymakers perceive growth. The macrosynergy package provides two useful functions, view_ranges() and view_timelines() , which facilitate convenient data visualization for selected indicators and cross-sections. These functions assist in plotting means, standard deviations, and time series of the chosen indicators.

xcatx = ["INTRGDP_NSA_P1M1ML12_3MMA", "INTRGDPv5Y_NSA_P1M1ML12_3MMA"]
cidx = cids_dux
start_date = "2000-01-01"

msp.view_ranges(
    df,
    cids=cidx,
    xcats=xcatx,
    kind="bar",
    sort_cids_by="mean",
    ylab="% daily rate",
    start=start_date,
    title="Means and standard deviations of intuitive GDP growth trends, % over a year ago, 3-month moving average, since 2000",
    xcat_labels=[
        "Annual intuitive growth trend",
        "Excess intuitive real GDP growth trend, 5 year lookback",
    ],
)
msp.view_timelines(
    df,
    xcats=xcatx,
    cids=cids_dux,
    ncol=4,
    cumsum=False,
    start=start_date,
    same_y=False,
    all_xticks=True,
    title_xadj=0.43,
    title="Intuitive GDP growth trends, % over a year ago, 3-month moving average",
    xcat_labels=[
        "Annual intuitive growth trend, 3-month moving average",
        "Excess intuitive real GDP growth trend, 5 year lookback",
    ],
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/9b864101b1bf1968cf946a7e34ce1b0324204ab5e681aaccf2e6d0e6a4f8291d.png https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/aedfcdcfdf8004dd752e8368e50f5491954d75318a02800b96a0e59b3af3eaac.png

2.1.2. Excess inflation #

Most quantamental indicators that professional investment managers would deploy require careful customization and transformations of original JPMaQS data. Since all quantamental data are standardized information states over a range of countries, transformations are simple.

The panel_calculator() function in macrosynergy.panel makes it easy and intuitive to apply a wide range of transformations to each cross section of a panel by using a string. The main rules are:

  • Consider the category ticker as a symbolic representation of the respective panel i.e. time series dataframe of all cross sections of the category.

  • Use standard Python and pandas expressions to engineer new features.

See below for examples of these rules in action: we calculate plausible metrics of excess inflation versus a country’s effective inflation target. The update_df() function adds the new indicators to the original dataframe df .
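
To make the first rule concrete, here is a small conceptual sketch (purely illustrative, not part of the actual workflow) of what a panel expression such as INFTEBASIS = INFTEFF_NSA.clip(lower=2) amounts to in plain pandas, applied to every cross-section at once:

# Conceptual equivalent of "INFTEBASIS = INFTEFF_NSA.clip(lower=2)":
# pivot the long quantamental dataframe to a wide (real_date x cid) frame for the
# category and apply the same pandas expression to all cross-sections in one go.
infteff_wide = (
    df[df["xcat"] == "INFTEFF_NSA"]
    .pivot(index="real_date", columns="cid", values="value")
)
inftebasis_wide = infteff_wide.clip(lower=2)  # one expression, every cross-section

panel_calculator() effectively applies such expressions across the panel and returns the result in the standard long JPMaQS format, so that it can be appended to df with update_df().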

# Preparation: for relative target deviations, we need denominator bases that should never be less than 2

dfa = msp.panel_calculator(df, ["INFTEBASIS = INFTEFF_NSA.clip(lower=2)"], cids=cids_du)
df = msm.update_df(df, dfa)

# Calculate absolute and relative target deviations for a range of CPI inflation metrics

infs = [
    "CPIH_SA_P1M1ML12",
    "CPIH_SJA_P6M6ML6AR",
    "CPIH_SJA_P3M3ML3AR",
    "CPIC_SA_P1M1ML12",
    "CPIC_SJA_P6M6ML6AR",
    "CPIC_SJA_P3M3ML3AR",
]
for inf in infs:
    calcs = [
        f"{inf}vIET = ( {inf} - INFTEFF_NSA )",
        f"{inf}vIETR = ( {inf} - INFTEFF_NSA ) / INFTEBASIS",
    ]

    dfa = msp.panel_calculator(df, calcs=calcs, cids=cids_du)
    df = msm.update_df(df, dfa)

# Average excess inflation metrics across three different standard horizons

calcs = []
for cp in ["CPIH", "CPIC"]:
    for v in ["vIET", "vIETR"]:
        calc = f"{cp}_SA_PALL{v} = ( {cp}_SA_P1M1ML12{v} + {cp}_SJA_P6M6ML6AR{v} + {cp}_SJA_P3M3ML3AR{v} ) / 3"
        calcs += [calc]

dfa = msp.panel_calculator(df, calcs, cids=cids_du)
df = msm.update_df(df, dfa)

Annual and “6m/6m” seasonally and jump-adjusted inflation rates mainly display large and medium-term cycles. The short-term “3m/3m” seasonally and jump-adjusted rates are a lot more volatile. view_timelines() is employed to display the history.

xcatx = ["CPIH_SA_P1M1ML12vIET", "CPIH_SJA_P6M6ML6ARvIET", "CPIH_SJA_P3M3ML3ARvIET"]
cidx = cids_du

msp.view_timelines(
    df,
    xcats=xcatx,
    cids=cidx,
    ncol=4,
    cumsum=False,
    start=start_date,
    same_y=False,
    all_xticks=True,
    title="CPI inflation rates, %ar, versus effective inflation target, market information state",
    xcat_labels=["% over a year ago", "% 6m/6m, saar", "% 3m/3m, saar"],
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/aff1c4ac321e178fe3fb657eeeeac9b03c06b02acba295b0e7947628097697fc.png

The use of relative excess inflation makes time series comparable across high and low inflation countries.

xcatx = ["CPIH_SA_PALLvIET", "CPIH_SA_PALLvIETR"]
cidx = cids_dux

msp.view_ranges(
    df,
    cids=cidx,
    xcats=xcatx,
    kind="bar",
    sort_cids_by="mean",
    ylab="% daily rate",
    start="2000-01-01",
    title="Means and standard deviations of relative excess inflation, since 2000",
    xcat_labels=[
        "Absolute inflation target deviations",
        "Relative inflation target deviations",
    ],
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/381d9936356760bfcb47c18cd9fe3671883b72c10c396dd9f678d560572a07f2.png

2.1.3. Excess credit growth #

Similar to excess inflation, excess credit growth metrics require transformations and a neutral benchmark. The neutral benchmark here is a medium-term nominal GDP growth estimate, calculated as the sum of the past five years’ real GDP growth and the effective estimated inflation target. panel_calculator() and update_df() are employed to calculate the new indicators and add them to the original dataframe df .

dfa = msp.panel_calculator(
    df,
    ["PCBASIS = INFTEFF_NSA + RGDP_SA_P1Q1QL4_20QMA"],
    cids=cids_du,
)
df = msm.update_df(df, dfa)

pcgs = [
    "PCREDITBN_SJA_P1M1ML12",
    "PCREDITGDP_SJA_D1M1ML12",
]
for pcg in pcgs:
    calc_pcx = f"{pcg}vLTB = {pcg} - PCBASIS "
    dfa = msp.panel_calculator(df, calcs=[calc_pcx], cids=cids_du)
    df = msm.update_df(df, dfa)

Excess ratios based on credit expansion relative to GDP and a nominal GDP benchmark are not fully plausible metrics, because the initial leverage of the economy strongly affects the expansion rate and should plausibly be considered in the benchmark. Put economically, countries with low bank leverage will always produce low rates versus a nominal GDP growth benchmark, and part of that shortfall may reflect other channels of leverage outside the banking system, such as the credit markets in the U.S. The macrosynergy package provides two useful functions, view_ranges() and view_timelines() , which assist in plotting means, standard deviations, and time series of the chosen indicators.

xcatx = ["PCREDITBN_SJA_P1M1ML12vLTB", "PCREDITGDP_SJA_D1M1ML12vLTB"]

msp.view_ranges(
    df,
    cids=cids_dux,
    xcats=xcatx,
    kind="bar",
    sort_cids_by="mean",
    ylab="% daily rate",
    start="2000-01-01",
)
msp.view_timelines(
    df,
    xcats=xcatx,
    cids=cids_dux,
    ncol=4,
    cumsum=False,
    start="2000-01-01",
    same_y=False,
    all_xticks=True,
    title="Private credit growth, %oya, relative to the sum of inflation target and long-term growth, market information state",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/42d807d41f1fdf8baa92774ff285bb4f6bbbf1aec8fcf8b8caf25c88df904e8e.png https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/51832a47c7489b07572cdae89f3c7bcd751afb0e18f8279aa97ca43fdf69396d.png

2.1.4. Composite macro trend pressure #

For a simple proof of concept, we can just combine excess growth, inflation, and credit expansion, using the most common metrics. This gives a simple first-shot candidate for a trading signal. Whilst optimization may improve the information content, empirical evidence based on the “simplest plausible” indicators is often a more reliable and less biased gauge of the value proposition. panel_calculator() makes it easy and intuitive to apply the calculation to each cross-section.

The composite indicators currently used are simple averages, which means they are not refined to optimize their performance as trading signals. To enhance the quality of these signals, one could employ machine learning techniques to design more sophisticated indicators. For instance, a statistical learning approach could be implemented where an optimal method for feature selection is identified sequentially. This method involves evaluating different scoring techniques to determine which is most effective, and then applying this optimal method to select features at each recalibration date. The outcome of this process is that the optimal signal at each recalibration point is computed as an equally weighted average of the features deemed most effective by the chosen model. Therefore, rather than relying on a simple average, this approach refines the selection process and constructs a more targeted and potentially more effective indicator based on the best performing model recommendations up to that recalibration date. This approach was explained and tested in the post “Optimizing macro trading signals – A practical introduction” .

In this notebook, we focus solely on testing simple, non-optimized, and intuitive indicators:

  • XGHI , the average of excess inflation and economic growth, %ar, end-of-month information state,

  • XGHIPC , the average of excess inflation, economic growth, and excess credit growth, %ar, end-of-month information state,

comparing them to their underlying components.

calcs = [
    "XGHI = ( INTRGDPv5Y_NSA_P1M1ML12_3MMA + CPIH_SA_PALLvIET ) / 2",
    "XGHIPC = ( INTRGDPv5Y_NSA_P1M1ML12_3MMA + CPIH_SA_PALLvIET + PCREDITBN_SJA_P1M1ML12vLTB ) / 3",
]

dfa = msp.panel_calculator(df, calcs, cids=cids_du)
df = msm.update_df(df, dfa)

The resulting composite macro trend pressure indicators "XGHI" and "XGHIPC" can be displayed using view_ranges() and view_timelines() :

xcatx = ["XGHI", "XGHIPC"]

msp.view_ranges(
    df,
    cids=cids_dux,
    xcats=xcatx,
    kind="bar",
    sort_cids_by="mean",
    ylab="% daily rate",
    start="2000-01-01",
)
msp.view_timelines(
    df,
    xcats=xcatx,
    cids=cids_dux,
    ncol=4,
    cumsum=False,
    start="2000-01-01",
    same_y=False,
    all_xticks=True,
    title="Composite macro trend pressure, % ar, in excess of benchmarks",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/75ed3e752c4408d187f09c538c4faa67fea55ab6e3607591f0c72e05a9a65402.png https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/116d0b345753b6380a4e787560dbfbf820012ea0000822c1c34f4333152af6ea.png

It is often important to assess if the strategy signal tends to give rise to a common direction of exposure across markets or rather to relative positions across currency areas.

The function correl_matrix() of the macrosynergy.panel module allows the user to quickly visualize the historic international correlation of the signal category.

In the case of composite excess macro trends, the correlation across currency areas has been predominantly positive, with very few exceptions for some pairs of emerging countries.

xcat = "XGHIPC"
msp.correl_matrix(
    df,
    xcats=xcat,
    cids=cids_dux,
    start="2000-01-01",
    cluster=True,
    title="International correlation of composite excess macro trends (growth, inflation and credit)",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/06a45402b8000c465f7de332d06691a546f1f6c1ba9771d87126c1dcdec1112c.png

2.1.5. Small country hybrid trend pressure #

Plausibly, smaller countries’ policies and markets are affected both by their own macro trends and by those in the dominant currency areas, as the latter have a critical influence on global financial conditions and, therefore, on local conditions.

Here we account for this in a crude way, by averaging local and G2 excess macro trends for the smaller countries and assuming that the smaller countries do not affect the G2.

Note that the panel_calculator() function can integrate individual cross-section series in calculations by prefixing them with i . In the example below, the individual series is added to each cross section of the panel. update_df() is used to add the calculated indicators to the original dataframe df .

calcs = [
    "XGHI_SCH = ( XGHI + 0.5 * iUSD_XGHI +  0.5 * iEUR_XGHI ) / 2",
    "XGHIPC_SCH = ( XGHIPC + 0.5 * iUSD_XGHIPC +  0.5 * iEUR_XGHIPC ) / 2",
]

dfa = msp.panel_calculator(df, calcs, cids=cids_xg2)
df = msm.update_df(df, dfa)

2.1.6. Relative macro trend pressure versus G2 #

Due to their size, the G2 economies are likely to dominate the impact of macro trends on cross-sectional fixed income returns. In order to put greater emphasis on the information content of non-G2 idiosyncratic trends, one can calculate these trends (and related returns) relative to the G2.

The convenience function make_relative_value() of the macrosynergy.panel module calculates values relative to an equally-weighted basket while adapting to missing periods of any of the basket cross-sections. The update_df() function in the macrosynergy.management module concatenates two JPMaQS data frames, effectively adding the newly calculated relative indicators with postfix vG2 to the original data frame df .

xcatx = [
    "INTRGDP_NSA_P1M1ML12_3MMA",
    "INTRGDPv5Y_NSA_P1M1ML12_3MMA",
    "CPIC_SA_PALLvIET",
    "CPIH_SA_PALLvIET",
    "PCREDITBN_SJA_P1M1ML12vLTB",
    "PCREDITGDP_SJA_D1M1ML12vLTB",
    "XGHI",
    "XGHIPC",
]

dfa = msp.make_relative_value(
    df, xcats=xcatx, cids=cids_dux, basket=["EUR", "USD"], postfix="vG2"
)

df = msm.update_df(df, dfa)

Relative macro trends have very different dynamics, as exemplified by the relative excess growth trends below.

P.S.: Inspection of the series below highlights a potential weakness of their use as trading signals and as a basis for strategy research: large fluctuations, in relative and absolute terms, are concentrated around the 2020/21 disruptions due to the COVID pandemic.

xcatx = ["INTRGDPv5Y_NSA_P1M1ML12_3MMA", "INTRGDPv5Y_NSA_P1M1ML12_3MMAvG2"]
cidx = cids_xg2

msp.view_timelines(
    df,
    xcats=xcatx,
    cids=cids_dux,
    ncol=4,
    cumsum=False,
    start="2000-01-01",
    same_y=False,
    all_xticks=True,
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/5bb13777db8eef0e1827796bf5271b9dde67e79c2c31f630c652d22f94539059.png

Using relative values significantly reduces the positive correlations across non-G2 cross sections.

xcat = "XGHIPCvG2"
msp.correl_matrix(
    df,
    xcats=xcat,
    cids=cids_xg2,
    start="2000-01-01",
    cluster=True,
    title="International correlation of excess macro trends relative to the G2",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/62d7efd99d0874b1d68996217742dcfc2f9cfd549e0e43d08a50d96172669939.png

2.1.7. Global relative macro trend pressure #

Another way of generating relative macro trends is calculating differences versus a basket of all available or tradable markets at each time. The benefit of this approach is that it further enhances the influence of small countries’ idiosyncratic trends and reduces the correlation of cross-sectional signals. The disadvantage is that the meaning and quality of underlying data vary across countries, and relative trends in the data are less reliable indicators of actual trends. As before, make_relative_value() and update_df() are used to calculate new relative macro trend pressure indicators with postfix vGLB and add them to the original dataframe df .

xcatx = [
    "INTRGDP_NSA_P1M1ML12_3MMA",
    "INTRGDPv5Y_NSA_P1M1ML12_3MMA",
    "CPIC_SA_PALLvIET",
    "CPIH_SA_PALLvIET",
    "PCREDITBN_SJA_P1M1ML12vLTB",
    "PCREDITGDP_SJA_D1M1ML12vLTB",
    "XGHI",
    "XGHIPC",
]

dfa = msp.make_relative_value(
    df,
    xcats=xcatx,
    cids=cids_dux,
    rel_xcats=[xc + "vGLB" for xc in xcatx],
)
df = msm.update_df(df, dfa)

Trends relative to a broad basket show similar broad cycles and long-term patterns as trends relative to the G2, but also occasional notable differences.

xcatx = ["INTRGDPv5Y_NSA_P1M1ML12_3MMAvG2", "INTRGDPv5Y_NSA_P1M1ML12_3MMAvGLB"]
cidx = cids_xg2

msp.view_timelines(
    df,
    xcats=xcatx,
    cids=cids_dux,
    ncol=4,
    cumsum=False,
    start="2000-01-01",
    same_y=False,
    all_xticks=True,
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/865c31419a17b492587ce67d770bca301448f24e013a71c09ac8169325a7ddb4.png

2.2. Targets #

2.2.1. Directional fixed-receiver IRS returns #

The analysis below focuses on 2-year and on 5-year IRS receiver returns on vol-targeted positions. The volatility targeting resets the position for each currency area at the beginning of each month to a level that produces 10% annualized volatility on a USD underlying risk capital. Such volatility targeting mimics a basic form of risk management and - more importantly for panel-based research - makes returns more comparable across currency areas.
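
The vol-targeted return series are provided directly by JPMaQS and are not re-computed in this notebook. Purely to illustrate the mechanics, a minimal sketch of monthly volatility targeting could look as follows; the function name, the 63-day lookback and the month-start reset are illustrative assumptions, not the JPMaQS methodology:

import numpy as np

def vol_target_returns(daily_returns, ann_vol_target=10.0, lookback=63):
    # daily_returns: pandas Series of daily % returns with a DatetimeIndex
    # annualized realized volatility over a rolling lookback window, in %
    realized_vol = daily_returns.rolling(lookback).std() * np.sqrt(252)
    # leverage is reset at the start of each month and held constant within it
    leverage = (ann_vol_target / realized_vol).resample("MS").first()
    leverage = leverage.reindex(daily_returns.index, method="ffill")
    return daily_returns * leverage

In principle, an unscaled return series such as DU02YXR_NSA for a single cross-section could be passed to such a function; the downloaded DU02YXR_VT10 and DU05YXR_VT10 series already embed J.P. Morgan’s own volatility-targeting methodology.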

Since daily returns are very volatile, it is typically more instructive to view them in cumulative form.

xcats_sel = ["DU02YXR_VT10"]
msp.view_timelines(
    df,
    xcats=xcats_sel,
    cids=cids_du,
    ncol=4,
    cumsum=True,
    start="2000-01-01",
    same_y=True,
    all_xticks=True,
    title="Cumulative duration return, in % of notional: 2-year maturity",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/54b1421158f92f939501f96f48d32610bb1f3f250ae617a770291dfcc401fc8b.png

The correlation of returns across currency areas has been mostly positive. Note that for the purpose of correlation analysis of returns, it is preferable to use weekly rather than daily returns to mitigate time zone effects, the distortion of daily correlation due to different trading hours.

xcat = "DU02YXR_VT10"
dfxx = df[df["xcat"] == xcat].set_index("real_date")
dfxx = dfxx.groupby(["cid", "xcat"]).resample(rule="W-FRI")["value"].sum()
dfxx = dfxx.reset_index()

msp.correl_matrix(
    dfxx,
    xcats=xcat,
    cids=cids_dux,
    start="2000-01-01",
    title="Correlation of weekly IRS receiver returns across markets",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/29c03f2a14e0adc7363d0d5d03959671b1ad9ec0ae613e92b880a5f53aec951f.png
xcat = "DU02YXR_VT10"
dfxx = df[df["xcat"] == xcat].set_index("real_date")
dfxx = dfxx.groupby(["cid", "xcat"]).resample(rule="W-FRI")["value"].sum()
dfxx = dfxx.reset_index()

msp.correl_matrix(
    dfxx,
    xcats=xcat,
    cids=cids_dux,
    start="2000-01-01",
    title="Correlation of weekly IRS receiver returns across markets",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/29c03f2a14e0adc7363d0d5d03959671b1ad9ec0ae613e92b880a5f53aec951f.png

2.2.2. Relative fixed-receiver IRS returns versus G2 #

IRS returns relative to the G2 can be calculated in the same way as the relative excess macro trend using make_relative_value() of the macrosynergy.panel module and update_df() .

dfa = msp.make_relative_value(
    df,
    xcats=["DU02YXR_VT10", "DU05YXR_VT10"],
    cids=cids_dux,
    blacklist=fxblack,
    basket=["EUR", "USD"],
    rel_xcats=["DU02YXR_VT10vG2", "DU05YXR_VT10vG2"],
)
df = msm.update_df(df, dfa)

Naturally, both short- and medium-term dynamics of relative returns are notably different from absolute returns.

xcats_sel = ["DU02YXR_VT10", "DU02YXR_VT10vG2"]
cidx = cids_xg2

msp.view_timelines(
    df,
    xcats=xcats_sel,
    cids=cidx,
    ncol=4,
    cumsum=True,
    start="2000-01-01",
    same_y=True,
    all_xticks=True,
    title="Cumulative fixed IRS receiver returns versus G2 (U.S. and euro area)",
    xcat_labels=None,
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/e38c6d171aca4fc27fad35d9a940fd8b95f284a3272a6c242f8f53de668d8c7c.png

Importantly, unlike excess macro trends, relative values of returns retain their dominant positive correlation. While some directional commonality is removed by looking at returns relative to the G2, the commonality of the relative benchmark is added.

xcat = "DU02YXR_VT10vG2"
dfxx = df[df["xcat"] == xcat].set_index("real_date")
dfxx = dfxx.groupby(["cid", "xcat"]).resample(rule="W-FRI")["value"].sum()
dfxx = dfxx.reset_index()
msp.correl_matrix(
    dfxx,
    xcats=xcat,
    cids=cids_xg2,
    start="2000-01-01",
    title="Correlation of weekly 2-year IRS receiver returns versus G2 across markets",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/bcc588e26917a9ac47e908735bba54c4bb186b7297f29f108f4668ddd672deae.png

2.2.3. Global relative fixed-receiver IRS returns #

Again, relative returns to a global basket are calculated similarly to relative returns of excess macro trends to keep signals and target positions conceptually aligned. The same method is used here: first calculate new relative indicators using make_relative_value() of the macrosynergy.panel module and then add new indicators to the original dataframe using update_df() function in the macrosynergy management module.

dfa = msp.make_relative_value(
    df,
    xcats=["DU02YXR_VT10", "DU05YXR_VT10"],
    cids=cids_dux,
    blacklist=fxblack,
    rel_xcats=["DU02YXR_VT10vGLB", "DU05YXR_VT10vGLB"],
)
df = msm.update_df(df, dfa)

Returns relative to a global basket often show similar long-term patterns as returns relative to the G2, but very different short-term dynamics.

xcats_sel = ["DU02YXR_VT10vG2", "DU02YXR_VT10vGLB"]
msp.view_timelines(
    df,
    xcats=xcats_sel,
    cids=cids_dux,
    ncol=4,
    cumsum=True,
    start="2000-01-01",
    same_y=True,
    all_xticks=True,
    title="Cumulative fixed 2-year IRS receiver returns, absolute and relative",
    xcat_labels=["10% vol-target returns", "relative to global basket"],
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/ce34c8f0430bb371894036e18eee2c85af168d84d131a55e5abd5f6278e5085e.png

The most important effect of using a broad basket as a benchmark for relative returns is that the dominant positive correlation is (naturally) removed. On the positive side, this means more idiosyncratic information can be brought to bear, and positions are more diversified. On the negative side, this means that leverage will be higher for a given volatility target of the strategy.

xcat = "DU02YXR_VT10vGLB"
dfxx = df[df["xcat"] == xcat].set_index("real_date")
dfxx = dfxx.groupby(["cid", "xcat"]).resample(rule="W-FRI")["value"].sum()
dfxx = dfxx.reset_index()
msp.correl_matrix(
    dfxx,
    xcats=xcat,
    cids=cids_dux,
    start="2000-01-01",
    title="Correlation of weekly relative 2-year IRS receiver returns across markets",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/c1071bb233b1913c20d5e4522fadadca81f62a6e3d17219f46044a50fabe3b32.png

3. Value checks #

In this part of the analysis, the notebook calculates the naive PnLs (Profit and Loss) for financial returns (duration returns in this notebook) using the previously derived composite indicators as well as their constituents. The PnLs are calculated based on simple trading strategies that utilize the indicators as signals (no regression is involved). The strategies involve going long (buying) or short (selling) on returns based purely on the direction of the score signals. The 4 simple macro strategies analyzed in this notebook are:

  • G2 directional macro trend : analysis of predictive power of macro trend pressure on subsequent interest rate swap receiver returns (on vol-targeted positions) in the two large currency areas - USD and EUR

  • non-G2 directional macro trend : simple directional macro trend pressure and subsequent IRS receiver returns (on vol-targeted positions) in other currency areas

  • non-G2 relative to G2 macro trend : predictive power of macro trends of the smaller countries relative to the G2 on vol-targeted IRS returns relative to similar returns in the G2

  • global relative pressure factors : checks the value of relative macro trends for each country versus an average of all other (available and tradable) countries on vol-targeted IRS returns relative to all available countries.

To evaluate the performance of these strategies, the notebook computes various metrics and ratios, including:

  • Correlation: Measures the relationship between the signals and subsequent financial returns. A positive correlation indicates that returns tend to move in the same direction as the signal, while a negative correlation indicates an opposite movement.

  • Accuracy Metrics: These metrics assess the accuracy of the score-based strategies in predicting the direction of market movements. Standard accuracy metrics include the accuracy rate, balanced accuracy, precision, etc. (see the small worked example after this list).

  • Performance Ratios: Various performance ratios, such as the Sharpe ratio, Sortino ratio, maximum drawdowns, etc.
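
As a small worked example of the first two accuracy metrics, the snippet below uses hypothetical sign series rather than actual JPMaQS output; the exact conventions used by SignalReturnRelations may differ in detail:

import pandas as pd

sig_sign = pd.Series([1, 1, -1, 1, -1, -1])  # sign of the (lagged) signal
ret_sign = pd.Series([1, -1, -1, 1, 1, -1])  # sign of the subsequent return

accuracy = (sig_sign == ret_sign).mean()           # share of correct calls: 4/6
hit_pos = (sig_sign[ret_sign == 1] == 1).mean()    # positive returns correctly called: 2/3
hit_neg = (sig_sign[ret_sign == -1] == -1).mean()  # negative returns correctly called: 2/3
balanced_accuracy = 0.5 * (hit_pos + hit_neg)      # average of the two hit ratios: 2/3

Balanced accuracy is informative when positive and negative returns are unevenly distributed across the sample.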

The notebook compares the performance of the simple strategies with the long-only performance of the respective asset classes.

It’s important to note that the analysis deliberately disregards transaction costs and risk management considerations. This is done to provide a more straightforward comparison of the strategies’ raw performance without the additional complexity introduced by transaction costs and risk management, which can vary based on trading size, institutional rules, and regulations.

3.3. Non-G2 relative to G2 pressure factors #

The below strategy type aligns idiosyncratic macro trends of the smaller countries relative to the G2 with vol-targeted IRS returns relative to similar returns in the G2.

To enable consistent and efficient analysis across various hypotheses, we have developed a custom dictionary, named dict_relg2 , tailored for the third hypothesis. This dictionary is designed with specific keys to streamline data handling and analysis:

  • sigs : A list of selected, plausible trading signals, which can later be analyzed in comparison to each other [“XGHIPCvG2”, “XGHIvG2”, “INTRGDPv5Y_NSA_P1M1ML12_3MMAvG2”, “CPIH_SA_PALLvIETvG2”, “CPIC_SA_PALLvIETvG2”, “PCREDITBN_SJA_P1M1ML12vLTBvG2”].

  • targs : A list of selected targets, specifically returns, which we aim to surpass using the chosen signals. Here, we are choosing the relative IRS returns [“DU02YXR_VT10vG2”, “DU05YXR_VT10vG2”]

  • cids : A list identifying the cross-sections used in the analysis, here cids_xg2 , i.e. all available currencies excluding USD and EUR.

  • start : The start date for the data period under analysis, set to “2000-01-01”.

  • cr : Short for “Category Relations”, a class from the macrosynergy.panel package designed to organize panels of signals and targets into formats suitable for analysis. This key will be populated post-calculations.

  • freqs : A list of frequencies, including M for monthly and Q for quarterly.

  • srr : Short for “Signal Returns Relations,” this key is used to compute the relationships between panels of the selected trading signals ( sigs ) and the panels of subsequent returns from the list ( targs ). This key will be populated post-calculations.

  • pnls : This key will later contain the time series of naive trading Profit and Loss (PnL), calculated by using each signal from the sigs list as the trading signal for the corresponding return from the targs list.

Specs #

Similar to the directional strategies, we start by defining a dictionary with the key parameters necessary for evaluating a specific type of strategy. This includes setting the signals, target returns, cross-sections, initial date, and analysis frequencies for evaluating a relative (to G2) IRS strategy for smaller currencies. Additionally, to facilitate easier interpretation, we define a dictionary of labels where the keys are the technical names of the indicators and the values provide their explanations.

relg2_labels = {
    "XGHIPCvG2": "Broad macro trend pressure vs G2",
    "XGHIvG2": "Growth and inflation trend pressure vs G2",
    "INTRGDPv5Y_NSA_P1M1ML12_3MMAvG2": "Excess real GDP growth trend vs G2",
    "CPIH_SA_PALLvIETvG2": "Excess CPI headline inflation trend vs G2",
    "CPIC_SA_PALLvIETvG2": "Excess CPI core inflation trend vs G2",
    "PCREDITBN_SJA_P1M1ML12vLTBvG2": "Excess private credit growth trend vs G2",
    "DU02YXR_VT10vG2": "2-year IRS receiver vs G2, vol targeted",
    "DU05YXR_VT10vG2": "5-year IRS receiver vs G2, vol targeted"
}


dict_relg2 = {
    "sigs": [key for key in relg2_labels.keys()][:6],
    "targs": ["DU02YXR_VT10vG2", "DU05YXR_VT10vG2"],
    "cids": cids_xg2,
    "start": "2000-01-01",
    "freqs": ["M", "Q"],
    "cr": None,
    "srr": None,
    "pnls": None,
}

The negative correlation between relative macro trends and subsequent returns has been less pronounced than for absolute values. This plausibly reflects the greater statistical uncertainty around relative trends. However, it was still significant and confirmed that trend effects were not all dominated by a single global or G2 factor. This creates confidence in the signals’ universal applicability in trading strategies.

Instances of the CategoryRelations class from the macrosynergy.panel package are designed to organize panels of features and targets into formats suitable for analysis. This class provides functionalities for frequency conversion, adding lags, and trimming outliers. The outcome of the CategoryRelations calculation is subsequently added to the custom dictionary dict_relg2 under the key cr .

dix = dict_relg2

sigx = dix["sigs"]  # List of signal names
targ = dix["targs"][0]  # Assuming only one target for simplicity
cidx = dix["cids"]  # cids selection
start = dix["start"]

cr_relg2 = {}

for sig in sigx:
    cr_relg2[f"cr_{sig}"] = msp.CategoryRelations(
        df,
        xcats=[sig, targ],
        cids=cidx,
        freq="M",
        lag=1,
        xcat_aggs=["last", "sum"],
        blacklist=fxblack,
        start=start,
        xcat_trims=[None, None]
    )

dix["cr"] = cr_relg2
dix = dict_relg2
cr = dix["cr"]

msv.multiple_reg_scatter(
        [cr["cr_XGHIPCvG2"], cr["cr_XGHIvG2"], cr["cr_INTRGDPv5Y_NSA_P1M1ML12_3MMAvG2"], cr["cr_CPIH_SA_PALLvIETvG2"], cr["cr_CPIC_SA_PALLvIETvG2"], cr["cr_PCREDITBN_SJA_P1M1ML12vLTBvG2"]],
        title="Broad and sectorial relative vs G2 macro pressure indicators and subsequent 2-year relative interest rate swap receiver returns vs G2, since 2000",
        ylab="Next month's return on 2-year IRS return, vol-targeted position, %",
        ncol=3,
        nrow=2,
        figsize=(15, 8),
        prob_est="map",
        coef_box="lower left", 
        subplot_titles=[lab for lab in list(relg2_labels.values())[0:6]])
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/b6c00fe2670a1fff9ff811dcfc593436c3eeac7e4098dac1210e618414118073.png

Accuracy and correlation check #

Also, accuracy scores have been lower for relative trends and returns. However, the reduced correlation of signals means that the incremental contribution of each signal to a global PnL has also been higher. Put simply, there has been a trade-off between quality and information value. SignalReturnRelations is used again to collect the signal-return relationships for the chosen frequencies. The result of the SignalReturnRelations calculation is then added to the custom dictionary dict_relg2 under the key srr

dix = dict_relg2

sigx = dix["sigs"]
targx = dix["targs"]
cidx = dix["cids"]
start = dix["start"]
freqs = dix["freqs"]

srr = mss.SignalReturnRelations(
    df,
    cids=cidx,
    sigs=sigx,
    blacklist=fxblack,
    sig_neg=[True] * len(sigx),
    rets=targx,
    freqs=freqs,
    start=start,
)

dix["srr"] = srr
dix = dict_relg2

srrx = dix["srr"]
freqs = dix["freqs"]
targx = dix["targs"]

# Table with renamed rows
dict_repl = dict(
    zip(
        [key + "_NEG" for key in relg2_labels.keys()],
        [val + " (neg)" for val in relg2_labels.values()],
    )
)
tbxx = (
    srrx.multiple_relations_table(signal_name_dict=dict_repl, freqs=freqs)
    .reset_index(level=["Aggregation"], drop=True)
    .reset_index()
)

# for column modifications
dict_cols = {
    "Signal" : "Signal",
    "Frequency": "Frequency",
    "accuracy": "Accuracy",
    "bal_accuracy": "Balanced accuracy",
    "pos_sigr": "Share of positive signals",
    "pos_retr": "Share of positive returns",
    "pearson": "Pearson coefficient",
    "kendall": "Kendall coefficient",
}

for xr in targx:

    tbx_xr = tbxx.loc[tbxx.Return == xr, list(dict_cols.keys())]
    tbx_xr.rename(columns=dict_cols, inplace=True)
 
    # Preserve the order of appearance for 'Signal'
    signals_order = tbx_xr['Signal'].unique()

    # Create a new DataFrame for sorted results
    sorted_dfs = []
    for signal in signals_order:
        # Filter data for current signal
        group = tbx_xr[tbx_xr['Signal'] == signal]

        # Sort 'Frequency' within the current 'Signal'
        sorted_group = group.sort_values(by='Frequency')
        
        # Append sorted group to list
        sorted_dfs.append(sorted_group)

    # Concatenate all sorted groups into one DataFrame
    tbx_xr = pd.concat(sorted_dfs)

    # Set the multi-index after sorting
    tbx_xr.set_index(["Signal", "Frequency"], inplace=True)


    # apply style and heading

    tbx_xr = tbx_xr.style.format("{:.2f}").set_caption(
        f"Predictive accuracy and correlation with respect to {relg2_labels[xr]} returns").set_table_styles(
        [{"selector": "caption", "props": [("text-align", "center"), ("font-weight", "bold"), ("font-size", "17px")]}])

    display(tbx_xr)
Predictive accuracy and correlation with respect to 2-year IRS receiver vs G2, vol targeted returns

| Signal | Frequency | Accuracy | Balanced accuracy | Share of positive signals | Share of positive returns | Pearson coefficient | Kendall coefficient |
|---|---|---|---|---|---|---|---|
| Broad macro trend pressure vs G2 (neg) | M | 0.53 | 0.53 | 0.42 | 0.50 | 0.07 | 0.05 |
| Broad macro trend pressure vs G2 (neg) | Q | 0.54 | 0.54 | 0.42 | 0.50 | 0.11 | 0.09 |
| Growth and inflation trend pressure vs G2 (neg) | M | 0.52 | 0.52 | 0.55 | 0.50 | 0.02 | 0.03 |
| Growth and inflation trend pressure vs G2 (neg) | Q | 0.53 | 0.53 | 0.56 | 0.50 | 0.02 | 0.04 |
| Excess real GDP growth trend vs G2 (neg) | M | 0.51 | 0.51 | 0.55 | 0.50 | 0.00 | 0.01 |
| Excess real GDP growth trend vs G2 (neg) | Q | 0.50 | 0.50 | 0.55 | 0.50 | -0.01 | 0.02 |
| Excess CPI headline inflation trend vs G2 (neg) | M | 0.52 | 0.52 | 0.57 | 0.50 | 0.03 | 0.04 |
| Excess CPI headline inflation trend vs G2 (neg) | Q | 0.54 | 0.54 | 0.56 | 0.50 | 0.05 | 0.05 |
| Excess CPI core inflation trend vs G2 (neg) | M | 0.51 | 0.51 | 0.48 | 0.50 | -0.00 | 0.01 |
| Excess CPI core inflation trend vs G2 (neg) | Q | 0.51 | 0.51 | 0.48 | 0.51 | -0.00 | 0.01 |
| Excess private credit growth trend vs G2 (neg) | M | 0.53 | 0.53 | 0.36 | 0.50 | 0.07 | 0.05 |
| Excess private credit growth trend vs G2 (neg) | Q | 0.56 | 0.56 | 0.36 | 0.50 | 0.12 | 0.10 |

Predictive accuracy and correlation with respect to 5-year IRS receiver vs G2, vol targeted returns

| Signal | Frequency | Accuracy | Balanced accuracy | Share of positive signals | Share of positive returns | Pearson coefficient | Kendall coefficient |
|---|---|---|---|---|---|---|---|
| Broad macro trend pressure vs G2 (neg) | M | 0.53 | 0.53 | 0.42 | 0.51 | 0.05 | 0.04 |
| Broad macro trend pressure vs G2 (neg) | Q | 0.53 | 0.53 | 0.42 | 0.52 | 0.09 | 0.07 |
| Growth and inflation trend pressure vs G2 (neg) | M | 0.52 | 0.52 | 0.55 | 0.51 | 0.02 | 0.02 |
| Growth and inflation trend pressure vs G2 (neg) | Q | 0.53 | 0.53 | 0.56 | 0.52 | 0.01 | 0.03 |
| Excess real GDP growth trend vs G2 (neg) | M | 0.50 | 0.50 | 0.55 | 0.51 | 0.00 | 0.01 |
| Excess real GDP growth trend vs G2 (neg) | Q | 0.51 | 0.51 | 0.55 | 0.52 | -0.02 | 0.01 |
| Excess CPI headline inflation trend vs G2 (neg) | M | 0.53 | 0.53 | 0.57 | 0.51 | 0.03 | 0.04 |
| Excess CPI headline inflation trend vs G2 (neg) | Q | 0.53 | 0.53 | 0.56 | 0.52 | 0.05 | 0.05 |
| Excess CPI core inflation trend vs G2 (neg) | M | 0.51 | 0.51 | 0.48 | 0.51 | 0.00 | 0.01 |
| Excess CPI core inflation trend vs G2 (neg) | Q | 0.51 | 0.51 | 0.48 | 0.52 | 0.01 | 0.01 |
| Excess private credit growth trend vs G2 (neg) | M | 0.53 | 0.53 | 0.36 | 0.51 | 0.05 | 0.04 |
| Excess private credit growth trend vs G2 (neg) | Q | 0.54 | 0.54 | 0.36 | 0.52 | 0.11 | 0.09 |

All signal versions and constituents display above 50% accuracy ratios, but not all have significant correlation.

PnLs #

PnLs are now generated for relative positions versus the G2, which in practice would require more leverage and cause higher transaction costs than simple directional strategies. Since leverage constraints and transaction costs are not considered here, comparisons with directional strategy are not fully valid.

The NaivePnL class of the macrosynergy.pnl module is the basis for calculating simple stylized PnLs for various signals under consideration of correlation benchmarks.

The related make_pnl() method calculates and stores generic PnLs based on a range of signals and their transformations into positions. The positioning options include the choice of trading frequency, z-scoring, simple equal-size long-short positions (-1/1), thresholds to prevent outsized positions, and rebalancing slippage. The generated PnLs are, however, naive insofar as they do not consider trading costs and plausible risk management restrictions. Also, if a volatility scale is set, this is done ex-post, mainly for the benefit of plotting different signals’ PnLs in a single chart.

A complementary method is make_long_pnl() , which calculates a “long-only” PnL based on a uniform long position across all markets at all times. This often serves as a benchmark for gauging the benefits of active trading.

dix = dict_relg2

sigx = dix["sigs"]
targx = dix["targs"][0]
cidx = dix["cids"]
start = dix["start"]

naive_pnl = msn.NaivePnL(
    df,
    ret=targx,
    sigs=sigx,
    blacklist=fxblack,
    cids=cidx,
    start=start,
    bms=["USD_EQXR_NSA", "USD_GB10YXR_NSA"],
)

for sig in sigx:
    naive_pnl.make_pnl(
        sig,
        sig_neg=True,
        sig_op="zn_score_pan",
        thresh=2,
        rebal_freq="monthly",
        vol_scale=10,
        rebal_slip=1,
        pnl_name=sig + "_PZN",
    )

naive_pnl.make_long_pnl(vol_scale=10, label="Long versus G2")

dix["pnls"] = naive_pnl

Relative trend-based positioning has produced fairly consistent value generation most of the time. It failed to do so during and after the pandemic when relative macro trends were particularly difficult to read. Simple cumulative PnLs in the class instance can be plotted with the plot_pnls() method.

dix = dict_relg2

start = dix["start"]
cidx = dix["cids"]
sigx = dix["sigs"]
pnlx = dix["pnls"]

pnls = [sig + "_PZN" for sig in sigx[:2]] + ["Long versus G2"]

pnl_relg2={key + "_PZN": value for key, value in relg2_labels.items()}
pnl_relg2_labels = {key: pnl_relg2[key] for key in list(pnl_relg2)[:2]}
pnl_relg2_labels["Long versus G2"] = "Long versus G2"


pnlx.plot_pnls(
    pnl_cats=pnls,
    title="Naive PnLs of relative macro pressure-based IRS strategies vs G2 countries",
    xcat_labels=pnl_relg2_labels,
    title_fontsize=16,
    ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
    pnl_cids=["ALL"],
    start=start,
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/2a1303e7184a6354dca389aee4e10a297d0eb6da0666a77af20381b2afaebb6b.png

The broad relative macro excess trend (including credit) produced much more naive value than the narrow trend.

dix = dict_relg2

start = dix["start"]
pnlx = dix["pnls"]
sigx = dix["sigs"]
pnlx = dix["pnls"]

pnls = [sig + "_PZN" for sig in sigx[:2]] + ["Long versus G2"]

df_eval = pnlx.evaluate_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    start=start,
)

df_eval = df_eval.rename(columns=pnl_relg2_labels)

# apply style and heading

df_eval = df_eval.style.format("{:.2f}").set_caption(
    f"Performance metrics of relative (vs G2) macro pressure-based IRS strategies position"
    ).set_table_styles(
    [{"selector": "caption", "props": [("text-align", "center"), ("font-weight", "bold"), ("font-size", "17px")]}
    ])

display(df_eval)
Performance metrics of relative (vs G2) macro pressure-based IRS strategies position
| xcat | Long versus G2 | Broad macro trend pressure vs G2 | Growth and inflation trend pressure vs G2 |
|---|---|---|---|
| Return % | -0.76 | 7.62 | 3.42 |
| St. Dev. % | 10.00 | 10.00 | 10.00 |
| Sharpe Ratio | -0.08 | 0.76 | 0.34 |
| Sortino Ratio | -0.11 | 1.12 | 0.49 |
| Max 21-Day Draw % | -10.79 | -12.96 | -18.35 |
| Max 6-Month Draw % | -23.42 | -15.01 | -31.06 |
| Peak to Trough Draw % | -55.26 | -20.43 | -50.37 |
| Top 5% Monthly PnL Share | -4.91 | 0.63 | 1.20 |
| USD_EQXR_NSA correl | 0.18 | -0.15 | -0.06 |
| USD_GB10YXR_NSA correl | -0.44 | 0.21 | 0.01 |
| Traded Months | 293.00 | 293.00 | 293.00 |

Relative signals have been a lot more diverse than absolute signals. On the positive side, this enhances the diversification of exposure. On the negative side, it can lead to very uneven value-at-risk for the strategy and challenges for risk management. Also, the instability of the relative signal around the pandemic suggests that the value of relative macro trend signals, as opposed to absolute trend signals, may be compromised at times of crisis and distortions. This reflects that the timing of distortionary influences may vary across countries.

dix = dict_relg2
pnlx = dix["pnls"]
sigx = dix["sigs"]

pnlx.signal_heatmap(pnl_name="XGHIPCvG2_PZN", 
                    freq="q", 
                    title=f"Average applied signal values for PnL based on broad relative macro pressure trend",   
                    figsize=(15, 8))
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/4c6ac87cb3e5163f6e6fc4ea2f685c90783f8f905b4e30e85a4bc2e97ec1650b.png

3.4. Global relative pressure factors #

This section checks the value of relative macro trends for each country versus an average of all other (available and tradable) countries. This signal focuses fully on the idiosyncratic macro excess trend of each country, making no distinction between large and small countries or developed and emerging countries. Statistical uncertainty around the signals would be particularly high, but the independence of the cross-sectional signals would also be particularly pronounced.

To ensure consistent computations with the hypotheses above, we also establish a custom dictionary with specific keys designed to streamline the analysis, dict_global , including:

  • sigs : A list of selected, plausible trading signals, which can later be analyzed in comparison to each other: [“XGHIPCvGLB”, “XGHIvGLB”, “INTRGDPv5Y_NSA_P1M1ML12_3MMAvGLB”, “CPIH_SA_PALLvIETvGLB”, “PCREDITBN_SJA_P1M1ML12vLTBvGLB”].

  • targs : A list of selected targets, specifically returns, which we aim to surpass using the chosen signals [“DU02YXR_VT10vGLB”, “DU05YXR_VT10vGLB”].

  • cids : A list identifying the cross-sections used in the analysis, here cids_dux , i.e. all available tradable markets.

  • start : The start date for the data period under analysis, set to "2002-01-01" .

  • freqs : A list of frequencies, including M for monthly and Q for quarterly.

  • cr : Short for “Category Relations”, a class from the macrosynergy.panel package designed to organize panels of signals and targets into formats suitable for analysis. This key will be populated post-calculations.

  • srr : Short for “Signal Returns Relations,” this key is used to compute the relationships between panels of the selected trading signals ( sigs ) and the panels of subsequent returns from the list ( targs ). This key will be populated post-calculations.

  • pnls : This key will later contain the time series of naive trading Profit and Loss (PnL), calculated by using each signal from the sigs list as the trading signal for the corresponding return from the targs list.

Specs and correlation #

Here we again start by defining a dictionary with the key parameters necessary for evaluating a global relative IRS strategy across all available markets: the signals, target returns, cross-sections, initial date, and analysis frequencies. Additionally, to facilitate easier interpretation, we define a dictionary of labels where the keys are the technical names of the indicators and the values provide their explanations.

global_labels = {
    "XGHIPCvGLB": "Broad macro trend pressure relative to global basket",
    "XGHIvGLB": "Growth and inflation trend pressure relative to global basket",
    "INTRGDPv5Y_NSA_P1M1ML12_3MMAvGLB": "Excess real GDP growth trend relative to global basket",
    "CPIH_SA_PALLvIETvGLB": "Excess headline CPI inflation trend relative to global basket",
    "PCREDITBN_SJA_P1M1ML12vLTBvGLB": "Excess private credit growth trend relative to global basket",
    "DU02YXR_VT10vGLB": "2-year IRS relative vol targeted returns", 
    "DU05YXR_VT10vGLB": "5-year IRS relative vol targeted returns" 
}

dict_global = {
    "sigs": [key for key in global_labels.keys()][:5],
    "targs": ["DU02YXR_VT10vGLB", "DU05YXR_VT10vGLB"],
    "cids": cids_dux,
    "start": "2002-01-01",
    "freqs": ["M", "Q"],
    "cr": None,
    "srr": None,
    "pnls": None,
}

The significant negative correlation between macro pressure and subsequent returns also holds for purely relative signals. Indeed, it has been statistically significant at weekly, monthly, and quarterly horizons.

Instances of the CategoryRelations class from the macrosynergy.panel package are designed to organize panels of features and targets into formats suitable for analysis. This class provides functionalities for frequency conversion, adding lags, and trimming outliers. The outcome of the CategoryRelations calculation is subsequently added to the custom dictionary dict_global under the key cr . The method reg_scatter within the class is used to display a correlation scatter plot of either the complete pooled data or a specific subset. For simultaneous analysis of multiple relationships, the multiple_reg_scatter() method can be employed.

dix = dict_global

sigx = dix["sigs"]  # List of signal names
targx = dix["targs"][0]  # Assuming only one target for simplicity
cidx = dix["cids"]  # cids selection
start = dix["start"]


cr_global = {}

for sig in sigx:
    cr_global[f"cr_{sig}"] = msp.CategoryRelations(
        df,
        xcats=[sig, targx],
        cids=cidx,
        freq="M",
        lag=1,
        xcat_aggs=["last", "sum"],
        blacklist=fxblack,
        start=start,
        xcat_trims=[None, None]
    )

dix["cr"] = cr_global 

The multiple_reg_scatter() method allows side-by-side comparison of several two-category relationships, including the strength of the linear association and any potential outliers. By default, it includes a regression line with a 95% confidence interval, which can help assess the significance of the relationship.

The prob_est argument in this context specifies which estimator to use for calculating the probability of a significant relationship between the feature category and the target category. Here, prob_est is set to "map" , which stands for “Macrosynergy panel test”. Often, cross-sectional experiences are not independent and are subject to common factors. Simply stacking data can lead to “pseudo-replication” and overestimated significance of correlation. A better method is to check significance through panel regression models with period-specific random effects. This technique adjusts targets and features of the predictive regression for common (global) influences. The stronger these global effects, the greater the weight of deviations from the period-mean in the regression. In the presence of dominant global effects, the test for the significance of a feature would rely mainly upon its ability to explain cross-sectional target differences. Conveniently, the method automatically accounts for the similarity of experiences across sections when assessing significance and, hence, can be applied to a wide variety of features and targets. View a related research post here that provides more information on this approach.
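
The “map” estimator itself is implemented inside the macrosynergy package. Purely as a conceptual illustration of period-specific random effects, and not the package’s actual implementation, the idea can be sketched with statsmodels on simulated panel data, where a random intercept per period absorbs the common global shock:

# Conceptual sketch of a panel test with period-specific random effects on
# simulated data; this is not the macrosynergy package's implementation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
cids_sim = ["AUD", "CAD", "GBP", "SEK", "USD"]
periods = pd.period_range("2015-01", periods=36, freq="M").astype(str)

rows = []
for p in periods:
    common = rng.normal()  # common (global) shock shared by all cross-sections
    for cid in cids_sim:
        x = rng.normal()                      # hypothetical lagged signal
        y = -0.3 * x + common + rng.normal()  # subsequent return with a global component
        rows.append({"cid": cid, "period": p, "feature": x, "target": y})
panel = pd.DataFrame(rows)

# Random intercepts by period absorb the common influence, so the significance
# of "feature" rests mainly on its ability to explain cross-sectional differences.
model = smf.mixedlm("target ~ feature", data=panel, groups=panel["period"])
print(model.fit().summary())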

dix = dict_global
cr = dix["cr"]

cr["cr_XGHIPCvGLB"].reg_scatter(
    labels=False,
    coef_box="lower left",
    title="Relative broad macro trend pressure and subsequent relative 2-year IRS returns, EM/DM since 2002",
    xlab="Broad macro trend pressure of country relative to global average, end-of-month information state",
    ylab="Next month's relative vol-targeted 2-year IRS return, %",
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/5882315795df3e273b638ee3e077a1c541c57310b60f52d8dac5a95411700a8c.png
dix = dict_global
cr = dix["cr"]


msv.multiple_reg_scatter(
        cat_rels=[cr["cr_"+ key] for key in list(global_labels.keys())[1:5]],
        title="Relative macro pressure indicators and subsequent relative 2-year IRS returns, EM/DM since 2002",
        ylab="Next month's return on 2-year IRS vol-targeted position (vs global basket), %",
        ncol=2,
        nrow=2,
        figsize=(15, 10),
        prob_est="map",
        coef_box="lower left", 
        subplot_titles=[lab for lab in list(global_labels.values())[1:5]])
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/adf4eedc06b524664feedd6179394a5c7996c02302fc2bd8238b84019fd355e2.png

Negative correlation has been confirmed for various splits of the sample and for most cross-sections, despite some massive outliers that have been unrelated to macro trends.

dix = dict_global
cr = dix["cr"]

cr["cr_XGHIPCvGLB"].reg_scatter(
    labels=False,
    coef_box="upper left",
    title="Relative broad macro trend pressure and subsequent relative 2-year IRS returns, EM/DM since 2002",
    xlab="Broad macro trend pressure of country relative to global average, end-of-month information state",
    ylab="Next month's relative vol-targeted 2-year IRS return, %",
    separator=2012,
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/22481d0e850ad8d6468a3e130bbc329600f2b43bdc005676acacaa4c49d89bef.png
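
The cross-sectional part of this claim can be inspected in the same way. Assuming the separator argument of reg_scatter() also accepts "cids" , the scatter can be split by currency area rather than by sub-period:

dix = dict_global
cr = dix["cr"]

# Split the scatter by cross-section rather than by sub-period to check whether
# the negative relation holds for most individual currency areas.
cr["cr_XGHIPCvGLB"].reg_scatter(
    labels=False,
    title="Relative broad macro trend pressure and subsequent relative 2-year IRS returns, by currency area",
    xlab="Broad macro trend pressure of country relative to global average, end-of-month information state",
    ylab="Next month's relative vol-targeted 2-year IRS return, %",
    separator="cids",
)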

Accuracy and correlation check #

The SignalReturnRelations class of the macrosynergy.signal module collects the main positioning factor and its key rivals (or constituents) and brings them into the right format through appropriate frequency conversion (which should correspond to the envisaged trading frequency), specifying whether each signal is supposed to predict the return positively or negatively. The result of the SignalReturnRelations calculation is then added to the custom dictionary dict_global .

Accuracy and balanced accuracy for the broadest relative macro trend have actually been higher than for relative trends versus the G2.

dix = dict_global

sigx = dix["sigs"]
targx = dix["targs"]
cidx = dix["cids"]
start = dix["start"]
freqs = dix["freqs"]

srr = mss.SignalReturnRelations(
    df,
    cids=cidx,
    sigs=sigx,
    sig_neg=[True] * len(sigx),
    rets=targx,
    freqs=freqs,
    start=start,
    blacklist=fxblack,
)

dix["srr"] = srr

The multiple_relations_table() method, part of the SignalReturnRelations class within the macrosynergy.signal module, facilitates the comparison of multiple signal-return relationships within a single table. This method is particularly useful for evaluating the efficacy of various signals against identical return series across potentially different financial returns and frequencies. The table is structured such that the first column displays the target (financial return), and the second column shows the name of the signal, with _NEG appended to indicate a negative relationship to the named return. The table also includes standard columns that conform to established metrics such as accuracy, balanced accuracy, and others, which are detailed in Macrosynergy’s documentation on summary tables . This structured approach allows for an intuitive and comprehensive assessment of signal performance in relation to various financial returns.

dix = dict_global

srrx = dix["srr"]
freqs = dix["freqs"]
targx = dix["targs"]

dict_repl = dict(
    zip(
        [key + "_NEG" for key in global_labels.keys()],
        [val + " (neg)" for val in global_labels.values()],
    )
)
tbxx = (
    srrx.multiple_relations_table(signal_name_dict=dict_repl, freqs=freqs)
    .reset_index(level=["Aggregation"], drop=True)
    .reset_index()
)

dict_cols = {
    "Signal": "Signal",
    "Frequency": "Frequency",
    "accuracy": "Accuracy",
    "bal_accuracy": "Balanced accuracy",
    "pos_sigr": "Share of positive signals",
    "pos_retr": "Share of positive returns",
    "pearson": "Pearson coefficient",
    "kendall": "Kendall coefficient",
}

for xr in targx:

    tbx_xr = tbxx.loc[tbxx.Return == xr, list(dict_cols.keys())]
    tbx_xr.rename(columns=dict_cols, inplace=True)

    # Preserve the order of appearance for 'Signal' and sort 'Frequency' within each signal
    signals_order = tbx_xr["Signal"].unique()
    sorted_dfs = [
        tbx_xr[tbx_xr["Signal"] == signal].sort_values(by="Frequency")
        for signal in signals_order
    ]
    tbx_xr = pd.concat(sorted_dfs)

    # Set the multi-index after sorting
    tbx_xr.set_index(["Signal", "Frequency"], inplace=True)

    # Apply style and heading
    tbx_xr = tbx_xr.style.format("{:.2f}").set_caption(
        f"Predictive accuracy and correlation with respect to {global_labels[xr]}"
    ).set_table_styles(
        [{"selector": "caption", "props": [("text-align", "center"), ("font-weight", "bold"), ("font-size", "17px")]}]
    )

    display(tbx_xr)
Predictive accuracy and correlation with respect to 2-year IRS relative vol targeted returns

| Signal | Frequency | Accuracy | Balanced accuracy | Share of positive signals | Share of positive returns | Pearson coefficient | Kendall coefficient |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Broad macro trend pressure relative to global basket (neg) | M | 0.53 | 0.53 | 0.59 | 0.52 | 0.07 | 0.05 |
| | Q | 0.54 | 0.54 | 0.59 | 0.51 | 0.10 | 0.08 |
| Growth and inflation trend pressure relative to global basket (neg) | M | 0.52 | 0.51 | 0.57 | 0.52 | 0.03 | 0.03 |
| | Q | 0.51 | 0.51 | 0.57 | 0.51 | 0.01 | 0.02 |
| Excess real GDP growth trend relative to global basket (neg) | M | 0.51 | 0.51 | 0.51 | 0.52 | 0.01 | 0.02 |
| | Q | 0.50 | 0.50 | 0.51 | 0.51 | -0.01 | 0.01 |
| Excess headline CPI inflation trend relative to global basket (neg) | M | 0.52 | 0.52 | 0.59 | 0.52 | 0.03 | 0.03 |
| | Q | 0.52 | 0.52 | 0.59 | 0.51 | 0.04 | 0.03 |
| Excess private credit growth trend relative to global basket (neg) | M | 0.53 | 0.53 | 0.58 | 0.52 | 0.07 | 0.05 |
| | Q | 0.54 | 0.54 | 0.59 | 0.51 | 0.11 | 0.08 |

Predictive accuracy and correlation with respect to 5-year IRS relative vol targeted returns

| Signal | Frequency | Accuracy | Balanced accuracy | Share of positive signals | Share of positive returns | Pearson coefficient | Kendall coefficient |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Broad macro trend pressure relative to global basket (neg) | M | 0.52 | 0.52 | 0.59 | 0.51 | 0.05 | 0.04 |
| | Q | 0.53 | 0.53 | 0.59 | 0.51 | 0.08 | 0.06 |
| Growth and inflation trend pressure relative to global basket (neg) | M | 0.52 | 0.52 | 0.57 | 0.51 | 0.02 | 0.02 |
| | Q | 0.50 | 0.50 | 0.57 | 0.51 | -0.00 | 0.01 |
| Excess real GDP growth trend relative to global basket (neg) | M | 0.52 | 0.52 | 0.51 | 0.51 | 0.00 | 0.01 |
| | Q | 0.49 | 0.49 | 0.51 | 0.51 | -0.02 | -0.00 |
| Excess headline CPI inflation trend relative to global basket (neg) | M | 0.52 | 0.52 | 0.59 | 0.51 | 0.02 | 0.03 |
| | Q | 0.53 | 0.52 | 0.59 | 0.51 | 0.03 | 0.03 |
| Excess private credit growth trend relative to global basket (neg) | M | 0.52 | 0.52 | 0.58 | 0.51 | 0.06 | 0.04 |
| | Q | 0.54 | 0.54 | 0.59 | 0.51 | 0.10 | 0.07 |

The detailed scoring of the predictive value of relative macro trends suggests that it relies mainly on relative credit trends. Relative growth and inflation have been less successful in predicting relative swap returns.

dix = dict_global
srrx = dix["srr"]
targx = dix["targs"]

srrx.accuracy_bars(
    sigs = "XGHIPCvGLB_NEG",
    ret = "DU02YXR_VT10vGLB",
    freq = "M",
    type="years",
    title="Monthly accuracy of relative macro pressure signals for relative 2-year IRS returns, all DM/EM",
    size=(14, 6),
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/020c6b798337bebdd4b0f522e45f476ebbac74e50b23289b8d38c0735c0d7f38.png

PnLs #

Relative trend PnLs have produced good naive value but would likely incur much higher trading costs than directional strategies. The consistency of the value generation suggests, however, that relative macro trends provide valid guidance for the allocation of duration exposure across currency areas.

The NaivePnL class of the macrosynergy.pnl module is the basis for calculating simple stylized PnLs for various signals under consideration of correlation benchmarks.

The related make_pnl() method calculates and stores generic PnLs based on a range of signals and their transformations into positions. The positioning options include the choice of trading frequency, z-scoring, simple equal-size long-short positions (-1/1), thresholds to prevent outsized positions, and rebalancing slippage. The generated PnLs are, however, naive insofar as they do not consider trading costs and plausible risk management restrictions. Also, if a volatility scale is set, this is done ex post, mainly for the benefit of plotting different signals’ PnLs in a single chart.

A complementary method is make_long_pnl() , which calculates a “long-only” PnL based on a uniform long position across all markets at all times. This often serves as a benchmark for gauging the benefits of active trading.

dix = dict_global

sigx = dix["sigs"]
targx = dix["targs"][0]
cidx = dix["cids"]
start = dix["start"]

naive_pnl = msn.NaivePnL(
    df,
    ret=targx,
    sigs=sigx,
    cids=cidx,
    blacklist=fxblack,
    start=start,
    
    bms=["USD_EQXR_NSA", "USD_GB10YXR_NSA"],
)

for sig in sigx:
    naive_pnl.make_pnl(
        sig,
        sig_neg=True,
        sig_op="zn_score_pan",
        thresh=2,
        rebal_freq="monthly",
        vol_scale=10,
        rebal_slip=1,
        pnl_name=sig + "_PZN",
    )

naive_pnl.make_long_pnl(vol_scale=10, label="Long vs global")

dix["pnls"] = naive_pnl 

Simple cumulative PnLs in the class instance can be plotted with the plot_pnls() method. They mainly inform on seasonality and stability of value generation under the assumption of negligible transaction costs.

dix = dict_global

sigx = dix["sigs"]
cidx = dix["cids"]
pnlx = dix["pnls"]
pnls = [sig + "_PZN" for sig in sigx[:2]] + ["Long vs global"]

pnl_global = {key + "_PZN": value for key, value in global_labels.items()}
pnl_global_labels = {key: pnl_global[key] for key in list(pnl_global)[:2]}
pnl_global_labels["Long vs global"] = "Long vs global"

pnlx.plot_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    title="Macro relative pressure-based naive PnL",
    title_fontsize=16,
    ylab="% of risk capital, for 10% annualized long-term vol, no compounding",
    xcat_labels=pnl_global_labels,
)
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/194c101cc6783458733003c61828dc8c57f0fc63833acce38ae86d1269b0f656.png

The evaluate_pnls() method displays a few characteristic performance metrics of selected signals’ naive PnLs. They illustrate that macro trend pressure-based portfolios generated much higher Sharpe ratios than the long-only portfolio, with negligible correlation to equity and duration return benchmarks.

dix = dict_global

start = dix["start"]
cidx = dix["cids"]
pnlx = dix["pnls"]
sigx = dix["sigs"]
pnls = [sig + "_PZN" for sig in sigx[:2]] + ["Long vs global"]

df_eval = pnlx.evaluate_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    start=start,
)

df_eval = df_eval.rename(columns=pnl_global_labels)

# apply style and heading

df_eval = df_eval.style.format("{:.2f}").set_caption(
    f"Performance metrics of macro pressure-based relative IRS strategies (all DM/EM)"
).set_table_styles(
    [{"selector": "caption", "props": [("text-align", "center"), ("font-weight", "bold"), ("font-size", "17px")]}
])
display(df_eval)
Performance metrics of macro pressure-based relative IRS strategies (all DM/EM)

| xcat | Long vs global | Broad macro trend pressure relative to global basket | Growth and inflation trend pressure relative to global basket |
| --- | --- | --- | --- |
| Return % | -2.93 | 9.84 | 3.65 |
| St. Dev. % | 10.00 | 10.00 | 10.00 |
| Sharpe Ratio | -0.29 | 0.98 | 0.36 |
| Sortino Ratio | -0.41 | 1.47 | 0.52 |
| Max 21-Day Draw % | -16.83 | -13.74 | -18.34 |
| Max 6-Month Draw % | -30.35 | -14.73 | -20.87 |
| Peak to Trough Draw % | -85.85 | -20.18 | -39.48 |
| Top 5% Monthly PnL Share | -1.22 | 0.60 | 1.24 |
| USD_EQXR_NSA correl | 0.00 | -0.11 | -0.07 |
| USD_GB10YXR_NSA correl | -0.00 | 0.08 | 0.05 |
| Traded Months | 269.00 | 269.00 | 269.00 |
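
The create_results_dataframe() helper from the macrosynergy.pnl module, applied below, condenses the comparison into a single table per frequency: for each (negated) signal it reports accuracy and correlation statistics alongside the Sharpe and Sortino ratios and the benchmark correlation of the corresponding naive PnL.
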
dix = dict_global
sigx = dix["sigs"]
cidx = dix["cids"]
freqs = dix["freqs"]

for fr in freqs:

    results = msn.create_results_dataframe(
        title=f"Performance metrics of relative macro pressure signals based strategies, {fr} frequency, for rel vol targeted 2-year IRS returns",
        df=df,
        ret="DU02YXR_VT10vGLB",
        sigs=sigx,
        cids=cidx,
        sig_ops="zn_score_pan",
        sig_adds=0,
        sig_negs=[True] * len(sigx),
        neutrals="zero",
        threshs=2,
        bm="USD_GB10YXR_NSA",
        # cosp=True,
        start=start,
        blacklist=fxblack,
        freqs=fr,
        agg_sigs="last",
        slip=1,
    )
    display(results)
Performance metrics of relative macro pressure signals based strategies, M frequency, for rel vol targeted 2-year IRS returns

| Signal | Accuracy | Bal. Accuracy | Pearson | Kendall | Sharpe | Sortino | Market corr. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CPIH_SA_PALLvIETvGLB_NEG | 0.520 | 0.517 | 0.030 | 0.030 | 0.585 | 0.850 | 0.049 |
| INTRGDPv5Y_NSA_P1M1ML12_3MMAvGLB_NEG | 0.508 | 0.508 | 0.008 | 0.017 | 0.183 | 0.259 | -0.000 |
| PCREDITBN_SJA_P1M1ML12vLTBvGLB_NEG | 0.521 | 0.519 | 0.066 | 0.047 | 1.011 | 1.524 | 0.087 |
| XGHIPCvGLB_NEG | 0.526 | 0.524 | 0.066 | 0.050 | 0.984 | 1.471 | 0.084 |
| XGHIvGLB_NEG | 0.514 | 0.511 | 0.025 | 0.024 | 0.365 | 0.524 | 0.053 |

Performance metrics of relative macro pressure signals based strategies, Q frequency, for rel vol targeted 2-year IRS returns

| Signal | Accuracy | Bal. Accuracy | Pearson | Kendall | Sharpe | Sortino | Market corr. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CPIH_SA_PALLvIETvGLB_NEG | 0.531 | 0.530 | 0.036 | 0.034 | 0.585 | 0.850 | 0.049 |
| INTRGDPv5Y_NSA_P1M1ML12_3MMAvGLB_NEG | 0.505 | 0.505 | -0.009 | 0.006 | 0.183 | 0.259 | -0.000 |
| PCREDITBN_SJA_P1M1ML12vLTBvGLB_NEG | 0.540 | 0.539 | 0.114 | 0.083 | 1.011 | 1.524 | 0.087 |
| XGHIPCvGLB_NEG | 0.543 | 0.542 | 0.100 | 0.077 | 0.984 | 1.471 | 0.084 |
| XGHIvGLB_NEG | 0.517 | 0.515 | 0.013 | 0.022 | 0.365 | 0.524 | 0.053 |
dix = dict_global
pnlx = dix["pnls"]
sigx = dix["sigs"]

pnlx.signal_heatmap(pnl_name="XGHIPCvGLB_PZN", 
                    freq="q", 
                    title=f"Average applied signal values for PnL based on broad relative macro trend indicator",  # Access the first value 
                    figsize=(15, 8))
https://macrosynergy.com/notebooks.build/data-science/trading-strategies-with-jpmaqs/_images/f7be26b6afdcc7d858d4298e8959cf2153349179f9f9451fea9b32bdd8237802.png