Astrology API for AI and LLMs: Context Endpoints Guide #
Large Language Models can generate remarkably compelling astrological interpretations, but only when given accurate, well-structured astrological data as input. General-purpose LLMs do not perform ephemeris calculations, and asking them to determine planetary positions from a birth date produces hallucinated results. The Astrologer API solves this with a dedicated /context endpoint family that returns pre-structured, AI-optimized data specifically designed to be consumed by GPT, Claude, Gemini, and other language models.
This guide covers how the context endpoints work, how to feed their output to various LLMs, and how to build production-grade AI astrology features including chatbots and RAG pipelines.
Why Context Endpoints Exist #
The Astrologer API has two parallel families of data endpoints. The /data endpoints (like /data/subject) return raw JSON with precise numerical values – planetary longitudes to many decimal places, house cusps, aspect orbs, and speed data. This is ideal for building charts, running calculations, or feeding structured databases.
The /context endpoints return the same underlying calculations, but formatted as XML-structured text optimized for LLM consumption. Instead of raw numbers, you get positions expressed in a format that a language model can directly reason about: Sun at 10.84° Capricorn in the Ninth House, direct, speed 1.0195°/day.
Available context endpoints:
- Subject context – basic birth chart placements for personality readings
- Natal chart context – full chart with aspects, distributions, and house analysis
- Synastry context – relationship compatibility data for two people
- Composite context – merged relationship chart analysis
- Transit context – current planetary transits to a natal chart
- Solar return context – yearly forecast data
- Lunar return context – monthly forecast data
Each context endpoint also returns the full chart_data JSON alongside the context string, so you can use both in your application.
Fetching AI-Ready Astrology Data #
Here is how to get a natal chart context that is ready to send to any LLM:
Python #
import requests

url = "https://astrologer.p.rapidapi.com/api/v5/context/birth-chart"
headers = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY",
    "X-RapidAPI-Host": "astrologer.p.rapidapi.com"
}
payload = {
    "subject": {
        "name": "Elena",
        "year": 1995,
        "month": 8,
        "day": 12,
        "hour": 16,
        "minute": 45,
        "city": "Rome",
        "nation": "IT",
        "longitude": 12.4964,
        "latitude": 41.9028,
        "timezone": "Europe/Rome"
    }
}

response = requests.post(url, json=payload, headers=headers)
data = response.json()

# The context field contains the AI-optimized XML string
astro_context = data["context"]

# The chart_data field contains the full JSON data
chart_data = data["chart_data"]

print(astro_context[:500])
JavaScript #
const url = "https://astrologer.p.rapidapi.com/api/v5/context/birth-chart";

const payload = {
  subject: {
    name: "Elena",
    year: 1995,
    month: 8,
    day: 12,
    hour: 16,
    minute: 45,
    city: "Rome",
    nation: "IT",
    longitude: 12.4964,
    latitude: 41.9028,
    timezone: "Europe/Rome",
  },
};

const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY",
    "X-RapidAPI-Host": "astrologer.p.rapidapi.com",
  },
  body: JSON.stringify(payload),
});

const data = await response.json();

// AI-optimized XML context
const astroContext = data.context;

// Full JSON chart data
const chartData = data.chart_data;

console.log(astroContext.substring(0, 500));
The returned context string looks like this (truncated):
<chart_analysis type="Natal">
  <chart name="Elena">
    <birth_data date="1995-08-12 16:45" city="Rome" nation="IT"
                lat="41.90" lng="12.50" tz="Europe/Rome" />
    <config zodiac="Tropical" house_system="Placidus"
            perspective="Apparent Geocentric" />
    <planets>
      <point name="Sun" position="19.42" sign="Leo"
             abs_pos="139.42" quality="Fixed" element="Fire"
             house="Ninth House" motion="direct" speed="0.9613" />
      <point name="Moon" position="7.88" sign="Taurus"
             abs_pos="37.88" quality="Fixed" element="Earth"
             house="Fifth House" motion="direct" speed="12.31" />
      ...
    </planets>
    <aspects count="58">
      <aspect type="square" p1="Sun" p2="Pluto"
              orb="2.14" angle="90" movement="separating" />
      ...
    </aspects>
  </chart>
  <element_distribution fire="35%" earth="25%"
                        air="20%" water="20%" />
  <quality_distribution cardinal="30%" fixed="45%"
                        mutable="25%" />
</chart_analysis>
This XML structure provides LLMs with a complete, unambiguous representation of the chart that they can interpret without any astronomical calculation.
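Because the context is well-formed XML, you can also pull individual values out of it programmatically when you need them outside the LLM flow. A minimal sketch using Python's standard library (the snippet below hard-codes a trimmed fragment of the format shown above; in practice you would parse the `context` string from the API response):

```python
import xml.etree.ElementTree as ET

# A trimmed, well-formed fragment of the context format shown above
sample_context = """<chart_analysis type="Natal">
  <chart name="Elena">
    <planets>
      <point name="Sun" position="19.42" sign="Leo" house="Ninth House" motion="direct" />
      <point name="Moon" position="7.88" sign="Taurus" house="Fifth House" motion="direct" />
    </planets>
  </chart>
</chart_analysis>"""

root = ET.fromstring(sample_context)

# Look up a planet by its name attribute
sun = root.find(".//point[@name='Sun']")
print(sun.get("sign"), sun.get("house"))  # Leo Ninth House
```

This is handy for sanity checks or logging, while the full string still goes to the model untouched.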
Feeding Astrology Data to LLMs #
Using with OpenAI (GPT) #
import requests
from openai import OpenAI

# Step 1: Get astrology context from the API
astro_url = "https://astrologer.p.rapidapi.com/api/v5/context/birth-chart"
astro_headers = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY",
    "X-RapidAPI-Host": "astrologer.p.rapidapi.com"
}
astro_response = requests.post(astro_url, json={
    "subject": {
        "name": "Elena",
        "year": 1995, "month": 8, "day": 12,
        "hour": 16, "minute": 45,
        "city": "Rome", "nation": "IT",
        "longitude": 12.4964, "latitude": 41.9028,
        "timezone": "Europe/Rome"
    }
}, headers=astro_headers).json()

astro_context = astro_response["context"]

# Step 2: Send to GPT with the astrology data as context
client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a professional astrologer. Interpret the birth chart data "
                "provided and give a warm, insightful personality reading. Focus on "
                "the Sun, Moon, and Ascendant as the core triad, then discuss "
                "notable aspects. Keep the tone encouraging but honest."
            )
        },
        {
            "role": "user",
            "content": f"Here is my birth chart data:\n\n{astro_context}\n\n"
                       f"Please give me a personality reading based on this chart."
        }
    ]
)
print(completion.choices[0].message.content)
Using with Anthropic (Claude) #
import requests
import anthropic

# Step 1: Get astrology context (same as above)
astro_response = requests.post(
    "https://astrologer.p.rapidapi.com/api/v5/context/birth-chart",
    json={
        "subject": {
            "name": "Elena",
            "year": 1995, "month": 8, "day": 12,
            "hour": 16, "minute": 45,
            "city": "Rome", "nation": "IT",
            "longitude": 12.4964, "latitude": 41.9028,
            "timezone": "Europe/Rome"
        }
    },
    headers={
        "Content-Type": "application/json",
        "X-RapidAPI-Key": "YOUR_API_KEY",
        "X-RapidAPI-Host": "astrologer.p.rapidapi.com"
    }
).json()

astro_context = astro_response["context"]

# Step 2: Send to Claude
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1500,
    system=(
        "You are a professional astrologer. Interpret the birth chart data "
        "provided and give a warm, insightful personality reading. Base every "
        "claim on the actual planetary positions in the data."
    ),
    messages=[
        {
            "role": "user",
            "content": f"Here is my birth chart data:\n\n{astro_context}\n\n"
                       f"Please give me a personality reading."
        }
    ]
)
print(message.content[0].text)
Building an AI Astrology Chatbot #
A production chatbot needs to handle multiple types of questions. Here is a pattern that uses different context endpoints depending on the user’s request:
import requests

BASE_URL = "https://astrologer.p.rapidapi.com/api/v5"
HEADERS = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY",
    "X-RapidAPI-Host": "astrologer.p.rapidapi.com"
}

def get_context_for_query(query_type, user_data, partner_data=None):
    """Fetch the appropriate astrology context based on query type."""
    if query_type == "personality":
        resp = requests.post(
            f"{BASE_URL}/context/birth-chart",
            json={"subject": user_data},
            headers=HEADERS
        )
        return resp.json()["context"]
    elif query_type == "compatibility" and partner_data:
        resp = requests.post(
            f"{BASE_URL}/context/synastry",
            json={
                "first_subject": user_data,
                "second_subject": partner_data
            },
            headers=HEADERS
        )
        return resp.json()["context"]
    elif query_type == "transits":
        resp = requests.post(
            f"{BASE_URL}/context/transit",
            json={"subject": user_data},
            headers=HEADERS
        )
        return resp.json()["context"]
    elif query_type == "yearly_forecast":
        resp = requests.post(
            f"{BASE_URL}/context/solar-return",
            json={"subject": user_data},
            headers=HEADERS
        )
        return resp.json()["context"]
    elif query_type == "monthly_forecast":
        resp = requests.post(
            f"{BASE_URL}/context/lunar-return",
            json={"subject": user_data},
            headers=HEADERS
        )
        return resp.json()["context"]
    return None
You can then route user questions to the appropriate context endpoint before passing both the question and the context to your LLM of choice.
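The routing itself can start as simple keyword matching on the user's message. The classifier below is a hypothetical sketch (its keyword lists are illustrative; a production system might instead use a small LLM call or intent model to classify the question):

```python
def classify_query(message: str) -> str:
    """Map a user question to one of the context query types."""
    text = message.lower()
    if any(w in text for w in ("compatib", "partner", "relationship", "synastry")):
        return "compatibility"
    if any(w in text for w in ("year ahead", "yearly", "solar return")):
        return "yearly_forecast"
    if any(w in text for w in ("month", "lunar return")):
        return "monthly_forecast"
    if any(w in text for w in ("today", "right now", "transit", "this week")):
        return "transits"
    # Default to a natal personality reading
    return "personality"
```

For example, `classify_query("Are my partner and I compatible?")` returns `"compatibility"`, which you would then pass to `get_context_for_query` along with both subjects' birth data.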
Structured Data for RAG Pipelines #
For Retrieval-Augmented Generation systems, you may want to store astrological data in a vector database alongside interpretive text. The context endpoints work well for this because they produce consistent, structured output that can be chunked and embedded.
A practical approach:
import requests

BASE_URL = "https://astrologer.p.rapidapi.com/api/v5"
HEADERS = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_API_KEY",
    "X-RapidAPI-Host": "astrologer.p.rapidapi.com"
}

def build_rag_documents(user_data):
    """Create embeddable documents from a user's chart for RAG."""
    # Get the full natal context
    natal = requests.post(
        f"{BASE_URL}/context/birth-chart",
        json={"subject": user_data},
        headers=HEADERS
    ).json()

    # Get the subject context (simpler, personality-focused)
    subject = requests.post(
        f"{BASE_URL}/context/subject",
        json={"subject": user_data},
        headers=HEADERS
    ).json()

    documents = []

    # Document 1: Core personality (from subject context)
    documents.append({
        "id": f"{user_data['name']}_personality",
        "content": subject["subject_context"],
        "metadata": {
            "type": "personality",
            "sun_sign": natal["chart_data"]["subject"]["sun"]["sign"],
            "moon_sign": natal["chart_data"]["subject"]["moon"]["sign"],
        }
    })

    # Document 2: Full chart analysis (from natal context)
    documents.append({
        "id": f"{user_data['name']}_natal_chart",
        "content": natal["context"],
        "metadata": {
            "type": "natal_chart",
            "aspect_count": len(natal["chart_data"]["aspects"]),
        }
    })

    return documents
These documents can then be embedded and stored in Pinecone, Weaviate, ChromaDB, or any vector store. When a user asks a question, you retrieve the relevant chart documents and include them in the LLM prompt.
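If your vector store favors smaller chunks, you can also split the context string into per-section pieces before embedding, so retrieval can surface just the planets or just the aspects. A minimal sketch, assuming the top-level section tags shown earlier (adjust the tag list to the sections your responses actually contain):

```python
import re

def chunk_context(context, tags=("planets", "aspects",
                                 "element_distribution", "quality_distribution")):
    """Split a context string into one chunk per top-level section tag."""
    chunks = {}
    for tag in tags:
        # Try a paired <tag>...</tag> block first, then a self-closing <tag ... />
        m = (re.search(rf"<{tag}\b[^>]*>.*?</{tag}>", context, re.DOTALL)
             or re.search(rf"<{tag}\b[^>]*/>", context))
        if m:
            chunks[tag] = m.group(0)
    return chunks
```

Each value in the returned dict can then be embedded as its own document, with the tag name stored as metadata for filtered retrieval.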
Effective Prompts for Astrology LLM Applications #
The quality of AI-generated readings depends heavily on prompt design. Here are patterns that produce consistently good results:
For personality readings, instruct the model to anchor every observation in specific chart placements:
“You are a professional astrologer interpreting a natal chart. For each personality trait you describe, cite the specific planetary placement or aspect that supports it. Start with the Sun-Moon-Ascendant triad, then cover dominant aspects.”
For compatibility readings, ask the model to balance positives and challenges:
“Analyze this synastry data as a relationship counselor with astrological expertise. Identify the top 3 strengths and top 3 challenges in this pairing, citing specific inter-aspects. End with practical advice for navigating the challenges.”
For transit readings, emphasize timing and actionable guidance:
“Based on these current transits to the natal chart, provide a forecast for the coming weeks. For each significant transit, explain what area of life it affects, when it peaks, and one concrete action the person can take.”
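These patterns are easiest to maintain as reusable templates in code. A hypothetical helper that pairs each query type with its system prompt and assembles the message list (the dict keys match the query types used in the chatbot example above):

```python
SYSTEM_PROMPTS = {
    "personality": (
        "You are a professional astrologer interpreting a natal chart. For each "
        "personality trait you describe, cite the specific planetary placement or "
        "aspect that supports it. Start with the Sun-Moon-Ascendant triad."
    ),
    "compatibility": (
        "Analyze this synastry data as a relationship counselor with astrological "
        "expertise. Identify the top 3 strengths and top 3 challenges, citing "
        "specific inter-aspects, and end with practical advice."
    ),
    "transits": (
        "Based on these current transits to the natal chart, provide a forecast "
        "for the coming weeks, with one concrete action per significant transit."
    ),
}

def build_messages(query_type, astro_context, question):
    """Assemble a chat-completion message list for the chosen reading type."""
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[query_type]},
        {"role": "user", "content": f"{astro_context}\n\n{question}"},
    ]
```

The resulting list drops straight into the `messages` parameter of the OpenAI example shown earlier, and the same templates work with Claude by passing the system entry via the `system` parameter instead.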
Combining Context with JSON Data #
A powerful pattern is to use the context string for LLM interpretation while using the JSON chart_data for structured display in your UI:
# From a single API call, you get both
response = requests.post(
    f"{BASE_URL}/context/birth-chart",
    json={"subject": user_data},
    headers=HEADERS
).json()

# Send context to LLM for natural language interpretation
llm_input = response["context"]

# Use chart_data for structured UI elements
chart_data = response["chart_data"]
sun_sign = chart_data["subject"]["sun"]["sign"]
moon_sign = chart_data["subject"]["moon"]["sign"]
element_dist = chart_data["element_distribution"]
This way, a single API call powers both your AI-generated text and your structured data displays.
Cost and Performance Considerations #
Context endpoints return more data than simple subject endpoints because they include both the XML context string and the full chart data JSON. For high-traffic applications:
- Cache context results aggressively. A person’s natal chart context never changes.
- Use the simpler subject context endpoint when you only need basic personality data, and the full natal context endpoint when you need aspects and distributions.
- For transit contexts, cache with a reasonable TTL since planetary positions change slowly (daily updates are sufficient for most use cases).
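A caching layer can start as an in-memory dict keyed by the request payload, with a TTL that depends on the endpoint. A minimal sketch (the `fetch_fn` parameter is a hypothetical wrapper around your `requests.post` call; swap the dict for Redis or similar in production):

```python
import json
import time

_cache = {}

def cached_fetch(endpoint, payload, fetch_fn, ttl_seconds=86400):
    """Return a cached context if still fresh, otherwise fetch and store it."""
    # Deterministic key from endpoint + payload
    key = endpoint + json.dumps(payload, sort_keys=True)
    entry = _cache.get(key)
    if entry and time.time() - entry["at"] < ttl_seconds:
        return entry["context"]
    context = fetch_fn(endpoint, payload)  # e.g. wraps requests.post(...)
    _cache[key] = {"context": context, "at": time.time()}
    return context
```

For natal chart contexts, pass `ttl_seconds=float("inf")` since the data never changes; the default of 86400 seconds (one day) fits transit and return contexts.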
Visit the Astrologer API landing page for the complete endpoint reference. Get your API key on RapidAPI to start building AI-powered astrology features.