Ornitela

Website: https://cpanel.glosendas.net/ 

Prerequisites

Ornitela

You will need:

  • An Ornitela account with Admin privileges.

Gundi

  • Gundi SFTP account details.

EarthRanger

To send movement data to EarthRanger, you will need:

  • A special EarthRanger user (e.g., “Gundi Service Account”) for this integration. Please refer to EarthRanger's documentation or contact Support.
  • A long-lived token for authentication assigned to the user created in the previous step. Please refer to EarthRanger's documentation or contact Support.

Configuration

Integration Requires Assistance

Self-service integration is not yet available; please contact our Support Team.

This integration requires assistance from our support team for setup and configuration. Please contact us at support@earthranger.com and we’ll guide you through the process to ensure everything is set up correctly.

We are actively working to make this integration self-service in the future. Stay tuned for updates!

GUIDES

1. Gundi SFTP Server Settings

Notes for Support

Log in to the Gundi SFTP Admin Panel.

Go to Users.

Use an existing user as a template (Actions > Use as Template).

Update the Root directory and Key Prefix with the correct name.

Click Save.

 
 

2. OrniTrack Settings

Notes for Support

Go to OrniTrack.

Go to Settings.

Enable data upload to FTP/SFTP server.

Select CSV 2 as the data file format.

Use sftp.gundiservice.org as the Host name.

Use 2022 as the Port number.

Use / as the Remote Directory.

Enter the username and password for the Gundi SFTP account.
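
Optionally, you can verify the credentials before relying on OrniTrack's upload. Below is a minimal connectivity check, a sketch using Python's paramiko library with the host and port from the steps above; the username and password are placeholders for your Gundi SFTP account details.

    # Optional SFTP connectivity check (sketch; host key verification omitted).
    import paramiko

    transport = paramiko.Transport(("sftp.gundiservice.org", 2022))
    transport.connect(username="YOUR_SFTP_USERNAME", password="YOUR_SFTP_PASSWORD")
    sftp = paramiko.SFTPClient.from_transport(transport)
    print(sftp.listdir("/"))  # a successful listing confirms the account works
    sftp.close()
    transport.close()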

 
 

3. Gundi Connection Settings

3.1 Log in to Gundi.

3.2 Click Create Connection.

3.3 Select Ornitela.

3.4 Select your Organization.

3.5 Enter a Connection Name (e.g., Ornitela to ER).

3.6 Set the Bucket Path to ornitela/[Gundi SFTP Username].

3.7 Use the default values for all other configurations, and adjust them as needed.

3.8 Click Next.

3.9 Select a Destination. Please consult our guides.

4. EarthRanger Settings

The following configurations are optional:

4.1. Select the data you want to see on the map.

4.1.1 Log in to EarthRanger as an admin (site.pamdas.org/admin).
4.1.2 Go to Source Providers.
4.1.3 Select the Source Provider created for this integration (e.g., "gundi_ornitela_UUID").

The “Source Provider” is automatically created after Gundi successfully retrieves data from this integration and pushes it to EarthRanger. If you don't find it, please ensure your Connection includes a Destination, and review the Connection Activity Logs for additional details. You should see some records mentioning that Gundi successfully delivered data to EarthRanger.
Important Note: The integration may take up to 5 minutes to run. Please verify your connection after this period.

4.1.4 Click on Subject Details Configuration.
4.1.5 Configure the additional information you would like to see on the map.

4.2 Assign your Subjects to a Subject Group.

Please refer to EarthRanger's documentation or contact Support.


Note on Latency

The time it takes for data to appear in your destination system depends on several factors, such as latency introduced by the source, network conditions, and intermediary systems. While these factors may vary, Gundi typically checks for available data at scheduled intervals (approximately every 10 minutes).

If data is not available in your system after this interval, please check the source of the data for its availability. If the issue persists, contact our Support team for assistance.


Troubleshooting


Q: How does file processing work end-to-end?
Files move through a sequence of bucket folders:

  1. Root (ornitela/) — Ornitela drops raw CSV files here. The connector checks this every 5 minutes.
  2. in_progress/ — When processing starts, the connector carves a chunk from the root file, compresses
      it, and places it here. The root file is updated with remaining rows (or deleted if empty).
  3. archive/ — On success, the chunk is moved here. If the root file still has rows, the next chunk is
      triggered automatically (chaining).
  4. dead_letter/ — On failure or timeout, the chunk is moved here instead.

Files named like bird001_a3f9c1b2_2.csv.gz encode: <original_stem>_<chain_id>_<chunk_index>.csv.gz.
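
If you need to split such a name programmatically, a small hypothetical helper (not part of the connector) can recover the parts; it assumes the chain_id is the 8-character hex segment described in the next question.

    # Hypothetical helper: split a chunk filename into its components.
    import re

    CHUNK_NAME = re.compile(
        r"^(?P<stem>.+)_(?P<chain_id>[0-9a-f]{8})_(?P<index>\d+)\.csv\.gz$"
    )

    def parse_chunk_name(filename: str) -> dict:
        match = CHUNK_NAME.match(filename)
        if not match:
            raise ValueError(f"Not a chunk filename: {filename}")
        return match.groupdict()

    print(parse_chunk_name("bird001_a3f9c1b2_2.csv.gz"))
    # -> {'stem': 'bird001', 'chain_id': 'a3f9c1b2', 'index': '2'}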


Q: How do I trace a single file's full processing history in logs?
Every log line for a chunk is tagged with the filename in square brackets:

   [bird001_a3f9c1b2_1.csv.gz] Starting processing for integration 97eed06a-...
   [bird001_a3f9c1b2_1.csv.gz] Processing in_progress/bird001_a3f9c1b2_1.csv.gz: 3000 rows
   [bird001_a3f9c1b2_1.csv.gz] Archived successfully
   [bird001_a3f9c1b2_1.csv.gz] Processed: extracted 3000 records, sent 3000 observations

To trace an entire chain (all chunks from one source file), filter logs by the chain_id (the 8-character
hex segment, e.g. a3f9c1b2). All chunks carved from the same source file share the same chain_id.

In Cloud Logging / GCP, use:

   textPayload=~"a3f9c1b2"
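
The same filter can be run programmatically. A minimal sketch using the google-cloud-logging client library (it assumes your default GCP project and credentials; the chain_id is the example above):

    # Fetch all log entries for one chain via the Cloud Logging API.
    from google.cloud import logging

    client = logging.Client()
    for entry in client.list_entries(filter_='textPayload=~"a3f9c1b2"'):
        print(entry.timestamp, entry.payload)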


Q: A file ended up in dead_letter/. What happened?
Check the logs for the filename. Common causes:

  • "Timed out — moving to dead_letter/": processing exceeded MAX_ACTION_EXECUTION_TIME (default: 8 min). The file may be too large, the Gundi API was slow, or Cloud Run killed the container.
  • "Error: <exception>" followed by "Successfully moved to dead_letter/": an unhandled exception during CSV parsing, download, or observation sending. See the exception on the preceding line.
  • "Could not move <file> to dead_letter/ — file remains in in_progress/": the dead-letter move itself failed (GCS permission or network issue). The file is still in in_progress/.

Retrying dead-letter files: Move the file back to the root folder (ornitela/); the cron job will pick it up on the next 5-minute tick. A sketch of this move is shown below.
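
A minimal sketch of that move using the google-cloud-storage client; the bucket name is a placeholder, and the paths assume the folder layout described in this guide (adjust them to your bucket_path):

    # Move a dead-letter chunk back to the root folder so the cron retries it.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("YOUR_GUNDI_BUCKET")  # placeholder bucket name

    blob = bucket.blob("ornitela/dead_letter/bird001_a3f9c1b2_1.csv.gz")
    bucket.copy_blob(blob, bucket, "ornitela/bird001_a3f9c1b2_1.csv.gz")
    blob.delete()  # remove the dead-letter copy once the root copy exists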

 

Q: Files are piling up in in_progress/ and not moving anywhere.
This usually means the processing action was interrupted before it could archive or dead-letter the file.

Causes:

  • Cloud Run SIGTERM at 5 minutes: if PROCESS_PUBSUB_MESSAGES_IN_BACKGROUND is not "true", Cloud Run kills the container at the request timeout (300s) before the action completes. The file stays in in_progress/ indefinitely.
  • Container restart / OOM: same result — no cleanup path ran.

To diagnose:
Check whether the file has been in in_progress/ for longer than 10 minutes with no log activity. If so, move it back to the root folder to retry (see the sketch below).
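
A minimal sketch of that check and requeue, using the google-cloud-storage client; the bucket name is a placeholder, and the blob's creation time is used as a proxy for how long it has sat in in_progress/:

    # Requeue in_progress chunks that appear stuck (older than 10 minutes).
    from datetime import datetime, timedelta, timezone
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("YOUR_GUNDI_BUCKET")  # placeholder bucket name
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=10)

    for blob in bucket.list_blobs(prefix="ornitela/in_progress/"):
        if blob.time_created < cutoff:
            name = blob.name.rsplit("/", 1)[-1]
            bucket.copy_blob(blob, bucket, f"ornitela/{name}")  # back to root
            blob.delete()
            print(f"Requeued stale chunk: {name}")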

Permanent fix:
Set PROCESS_PUBSUB_MESSAGES_IN_BACKGROUND=true in Cloud Run.

 

Q: process_new_files reports new_files_found: 0 but files exist in the bucket.
The action only counts root-level files — files not under in_progress/, archive/, or dead_letter/. Files already being processed or archived are intentionally excluded.

If the root appears empty but you expect files:

  1. Check that files were uploaded to the correct bucket and bucket_path prefix (default: ornitela/).
  2. Verify the INFILE_STORAGE_BUCKET env var is set to the correct bucket.
  3. Look for log lines: "Could not get metadata for file <name>" — this indicates a permissions or network issue during listing.
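
To sanity-check what the action sees, you can replicate the root-level listing yourself. A sketch using the google-cloud-storage client (bucket name is a placeholder; the prefix matches the default bucket_path):

    # List only root-level files: under the prefix, but outside the
    # in_progress/, archive/, and dead_letter/ subfolders.
    from google.cloud import storage

    PREFIX = "ornitela/"
    EXCLUDED = ("in_progress/", "archive/", "dead_letter/")

    client = storage.Client()
    for blob in client.list_blobs("YOUR_GUNDI_BUCKET", prefix=PREFIX):
        relative = blob.name[len(PREFIX):]
        if relative and not relative.startswith(EXCLUDED):
            print("root-level file:", blob.name)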

 

Q: process_new_files returns success immediately but observations haven't arrived yet.
This is expected behavior. process_new_files returns as soon as the chunk sub-actions are triggered — it does not wait for them to finish.

The actual observations are sent by action_process_ornitela_file, which runs asynchronously.

Check logs for:
[<chunk_name>] Processed: extracted N records, sent N observations

 

Q: Observations are arriving but they're all filtered out (nothing appears in EarthRanger).
Check generate_gundi_observations filtering. Rows are dropped if:

  • recorded_at is older than historical_limit_days (default: 5 days). If the file contains older historical data, increase this value in the action config.
  • The UTC_datetime field is missing or malformed. Parsing raises a ValueError and the row is skipped, with a log like:
    Error parsing CSV row in bird001_a3f9c1b2_1.csv.gz: time data '' does not match format '%Y-%m-%d %H:%M:%S'
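
Taken together, the two rules behave like the following sketch (a hypothetical function, not the connector's actual code; the field name and timestamp format match the log above):

    # Sketch of the filtering rules: drop old rows and malformed timestamps.
    from datetime import datetime, timedelta, timezone

    HISTORICAL_LIMIT_DAYS = 5  # connector default; configurable per action

    def keep_row(row: dict) -> bool:
        try:
            recorded_at = datetime.strptime(
                row["UTC_datetime"], "%Y-%m-%d %H:%M:%S"
            ).replace(tzinfo=timezone.utc)
        except (KeyError, ValueError):
            return False  # missing or malformed timestamp: row is skipped
        cutoff = datetime.now(timezone.utc) - timedelta(days=HISTORICAL_LIMIT_DAYS)
        return recorded_at >= cutoff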

 

Q: The chain stopped — only chunk 1 was processed, chunks 2+ never appeared.
The self-chaining mechanism triggers the next chunk from _trigger_next_chunk after archiving chunk N.

If the chain breaks:

  1. Lock contention: "Source file is locked — cron will handle the next chunk"
    This is normal. The 5-minute cron will pick up the root file on the next run.
  2. Source file 404: "Source file no longer exists — chain complete"
    The root file was already consumed or deleted. The chain finished normally.
  3. Trigger error: "Error triggering next chunk: <error>"
    The trigger_action call failed. The root file still exists with remaining rows, so the cron will retry.
  4. Cron picked it up independently:
    Both the self-chain and cron attempted processing. The lock prevented duplication, and the successful chain ID appears in logs.
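
To see how far a chain actually got, you can count the archived chunks that share its chain_id. A sketch using the google-cloud-storage client (bucket name is a placeholder):

    # Count archived chunks belonging to one chain.
    from google.cloud import storage

    chain_id = "a3f9c1b2"  # example chain_id from the logs above
    client = storage.Client()
    chunks = sorted(
        blob.name
        for blob in client.list_blobs("YOUR_GUNDI_BUCKET", prefix="ornitela/archive/")
        if f"_{chain_id}_" in blob.name
    )
    print(f"{len(chunks)} chunk(s) archived for chain {chain_id}:", chunks)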

 

Q: How do I check how many observations a specific run sent?
The activity log in the Gundi portal shows per-chunk results. Each process_ornitela_file action logs:

[<chunk_name>] Processed: extracted N records, sent N observations

The activity log entry includes chain_id as structured metadata, allowing filtering of all chunks from one source file.

 

Q: The cleanup action deleted files it shouldn't have, or isn't deleting anything.
action_cleanup_archive runs at midnight and deletes files from archive/ older than delete_after_archive_days (default: 3 days), based on GCS timeCreated.

  • Deleting too aggressively: increase delete_after_archive_days in the action config.
  • Not deleting anything: ensure the CleanupArchive action config exists and the value is set correctly. Look for:
    Deleted old archived file: archive/<name>
  • Errors during cleanup:
    "Could not process archived file archive/<name>: <error>"
    Individual file errors are logged as warnings and skipped. The rest continue.
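
For reference, the deletion rule amounts to the following sketch (bucket name is a placeholder; the threshold mirrors the default):

    # Delete archived chunks older than delete_after_archive_days,
    # judged by GCS creation time (blob.time_created).
    from datetime import datetime, timedelta, timezone
    from google.cloud import storage

    DELETE_AFTER_ARCHIVE_DAYS = 3  # connector default

    client = storage.Client()
    cutoff = datetime.now(timezone.utc) - timedelta(days=DELETE_AFTER_ARCHIVE_DAYS)
    for blob in client.list_blobs("YOUR_GUNDI_BUCKET", prefix="ornitela/archive/"):
        if blob.time_created < cutoff:
            blob.delete()
            print(f"Deleted old archived file: {blob.name}")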

 

Q: What do the log levels mean for this connector?
  • INFO: Normal progress, such as chunk creation, row parsing, archiving, and triggering the next chunk.
  • WARNING: Recoverable issues, such as metadata fetch failures, transient GCS interruptions, or file locks.
  • ERROR: A file was sent to dead letter, a move to dead letter failed, or an unhandled exception occurred.
  • DEBUG: Per-file encoding detection (visible only if LOGGING_LEVEL=DEBUG).

 

Data Provider,  Animal Tracking, Movement Data, Pull Integration
April 13, 2026