Issue: had to use same system name for intersect client -> proxy server -> proxy client -> intersect service #7

@marshallmcdonnell

Description
This is an issue we experienced when we had the AWS <-> ORC INTERSECT instances set up.

Diagram of setup: (image of the AWS intersect client -> AWS proxy service -> ORNL proxy client -> ORNL INTERSECT service topology, not reproduced here)

Issues:

  • [primary]: we had to use the INTERSECT service name ornl.mdf.roost on the AWS intersect client, the AWS proxy client, and the ORNL proxy client. This does not seem like the intended behavior; yet, after pragmatically trying all the different hierarchy options, this was the only way to get messages across the broker proxy setup.
    • Setup: AWS intersect client -> AWS proxy service -> ORNL proxy client -> ORNL INTERSECT service
    • The system name was dictated by the ORNL INTERSECT service
    • Example of hierarchy for the AWS intersect client was:
      # Hierarchy for this client
      # NOTE: for proxy hop, MUST MATCH ORNL ONE (weird, I know...)
      hierarchy_client:
        organization: ornl
        facility: mdf
        system: roost
      
    • Example of ORNL proxy client helm chart values.yaml section
      aws-proxy-http-client:
        enabled: true
        image:
          tag: "9284ad6360fd3a1463ad2a37a23642044b952037" # last version before MQTT protocol refactor
        app:
          topic_prefix: "ornl.mdf.roost."
          log_level: "debug"
        broker:
          username: *messageBroker-username
          host: *messageBroker-internalHost
          protocol: "amqp"
          password:
            isSecret: true
            secretName: *messageBroker-password
            secretKey: *messageBroker-password-key
        other_proxy:
          url: "https://intersect.genesismission.click"
          username: "REDACTED"
          password: "" # gets set for real in the secrets file
    • The ORNL INTERSECT service uses a normal SDK v0.8 hierarchy definition; we can provide it if needed.
    
    
  • [secondary]: With the setup above, the big issue was that messages sent from the AWS intersect client were duplicated on the ORNL broker (roughly 10-100 INTERSECT messages created in ~1 second). The INTERSECT services were able to ignore the duplicates at the application level to make this work, but only approximately 2/3 of the time. So this is probably not the main issue! And if we solve the primary one, we can test whether this is fixed.
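To make the primary issue concrete, here is a minimal sketch of how a topic prefix could be derived from an organization/facility/system hierarchy, matching the "topic_prefix: ornl.mdf.roost." value in the proxy client config above. The `Hierarchy` class and `topic_prefix` method are illustrative assumptions, not the actual INTERSECT SDK API; the point is that every hop that builds its topic from its own hierarchy must produce the same prefix for routing to line up, which is why the AWS client ended up needing the ORNL service's names.

```python
# Hypothetical sketch: deriving a broker topic prefix from a hierarchy.
# NOTE: Hierarchy and topic_prefix() are assumed names for illustration,
# not the real INTERSECT SDK interface.
from dataclasses import dataclass

@dataclass
class Hierarchy:
    organization: str
    facility: str
    system: str

    def topic_prefix(self) -> str:
        # Dot-joined hierarchy with a trailing dot, mirroring the
        # "topic_prefix" field in the proxy client values.yaml above.
        return f"{self.organization}.{self.facility}.{self.system}."

# If the AWS client uses its own names, its prefix differs from the ORNL
# service's prefix and the proxied messages never match:
aws = Hierarchy("aws", "somefacility", "somesystem")
ornl = Hierarchy("ornl", "mdf", "roost")
print(aws.topic_prefix())   # aws.somefacility.somesystem.
print(ornl.topic_prefix())  # ornl.mdf.roost.
```

This is why, pragmatically, setting the AWS client's hierarchy to ornl/mdf/roost was the only configuration that delivered messages end to end.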
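For the secondary issue, the application-level workaround described above (ignoring duplicated messages) can be sketched as a simple seen-ID filter. The `message_id` field and `handle_once` function are assumptions for illustration; the real services may deduplicate differently.

```python
# Hypothetical application-level deduplication: process a message only the
# first time its ID is seen, and ignore broker-level duplicates after that.
seen_ids: set[str] = set()

def handle_once(message_id: str, payload: str) -> bool:
    """Return True if the message was processed, False if it was a duplicate."""
    if message_id in seen_ids:
        return False  # duplicate from the broker; drop it
    seen_ids.add(message_id)
    # ... process payload here ...
    return True

assert handle_once("m1", "hello") is True
assert handle_once("m1", "hello") is False  # duplicate ignored
```

Note that this only masks the duplication (and, per the above, only worked about 2/3 of the time); fixing the primary hierarchy/proxy issue may remove the duplicates at the source.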
