Code Review / Pull Request Requirements

Intro

When code is created, it must go through peer review. Why? Because we're human. We make mistakes. We miss things. That is one of the underlying premises of the indeni platform. To decrease the chances of error in the knowledge we develop, we must employ peer review.

When a knowledge expert finishes developing and testing an IND script or a rule, they use the "pull request" feature to request a review of their code. A pull request essentially means "I've developed code on a side branch in Git, and I want to merge it into the main repository".



Important

Please use Visual Studio Code and the Indeni code-quality plugin to catch problems as you develop.

Required tests for PR submission

Section / Topic / Comments


Hands on

Add a link to a server with the code deployed

Add live alert

 

Code

Readability





Naming conventions (see the sketch below)

  • Use snake_case for IND, rules, ATE, ARE

  • Use CamelCase for classes

  • Use single quotes for strings ('some string'), not double quotes ("some string")

  • For PAN CVE rules, the rule_friendly_name should follow this convention: <CVE_ID> <The actual title from the vendor>. If there is no CVE ID, use the PAN-SA ID instead (keeping the title as close to the vendor's as possible).
    Example: CVE-2020-2002 PAN-OS: Spoofed Kerberos key distribution center authentication bypass
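
A minimal, hypothetical sketch of these conventions (every name and value below is illustrative, not real knowledge content):

# Illustrative names only, showing the conventions above.
rule_friendly_name = 'CVE-2020-2002 PAN-OS: Spoofed Kerberos key distribution center authentication bypass'  # single-quoted string


class InterfaceStateParser:                        # CamelCase for class names
    def parse_interface_state(self, raw_data):     # snake_case for functions and variables
        interface_name = 'ethernet1/1'
        return {interface_name: raw_data}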



IND scripts use specific tags, not generic ones





A test file, written in Python, is created to validate your code (a minimal sketch follows)
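
A minimal sketch of such a test file, assuming the standard unittest module; the import path and mock response below are illustrative and do not reflect the actual indeni test harness:

import unittest

# Hypothetical import path: point it at wherever your parser class actually lives.
from show_system_environmentals import ShowSystemEnvironmentals

MOCK_SUCCESS_XML = (
    '<response status="success">'
    '<result><thermal>ok</thermal></result>'
    '</response>'
)


class ShowSystemEnvironmentalsTest(unittest.TestCase):
    def setUp(self):
        self.parser = ShowSystemEnvironmentals()

    def test_valid_input_is_parsed(self):
        # A well-formed, successful response must parse without raising.
        # Assertions on the produced metrics would go here; the exact output
        # API depends on the parser framework and is not shown in this sketch.
        self.parser.parse(MOCK_SUCCESS_XML, {}, {})

    def test_empty_input_does_not_crash(self):
        # The parser must handle missing input gracefully (see the Error handling section below).
        self.parser.parse('', {}, {})


if __name__ == '__main__':
    unittest.main()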





Grammar of friendly strings

  • Use official language

  • Use capitals for acronyms (DNS and not dns)

  • Use proper vendor naming (Check Point, not checkpoint)



Data is always reported, and reported only once



 

Error handling

  • Validate that the received input is valid

  • Validate that the data returned by the parser is valid

  • When possible, validate data integrity (e.g. PAN XML API)

The code must not crash!

Example

class ShowSystemEnvironmentals(BaseParser):
    def parse(self, raw_data: str, dynamic_var: dict, device_tags: dict):
        # Only parse when input was actually received
        if raw_data:
            xml_data = helper_methods.parse_data_as_xml(raw_data)
            # Validate the parsed XML and the PAN XML API status before using it
            if xml_data and xml_data['response']['@status'] == 'success':
                pass  # code goes here

 

 

ATE

  • PAN

    • Use HTTPS requests only (see the sketch below)
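
A generic illustration of the HTTPS-only requirement against the PAN-OS XML API, using the standard requests library; this is not the indeni ATE framework, and the hostname and API key are placeholders:

import requests

DEVICE = 'firewall.example.com'   # placeholder hostname
API_KEY = 'REPLACE_ME'            # placeholder API key

# Always call the PAN-OS XML API over https://, never http://.
response = requests.get(
    f'https://{DEVICE}/api/',
    params={
        'type': 'op',
        'cmd': '<show><system><info></info></system></show>',
        'key': API_KEY,
    },
    timeout=30,
)
response.raise_for_status()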

 

Tests

Test doc



 

 

In the UI

Validate that all of your work functions properly

  • Validate rule exists in Knowledge Explorer

  • Validate alert is created

  • Validate alert is resolved

  • Review that all friendly info is in place

 

 

Content testing

  • Issue items: test more than one

 

 

Negative testing

  • Check cases where the input is corrupted (see the sketch below)
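
A short sketch of such negative tests, in the same unittest style as the test-file sketch above; the truncated and error responses are fabricated examples of corrupted input, and the import path is hypothetical:

import unittest

# Hypothetical import path, as in the earlier test-file sketch.
from show_system_environmentals import ShowSystemEnvironmentals


class CorruptedInputTest(unittest.TestCase):
    def test_truncated_xml_does_not_crash(self):
        # Deliberately truncated response: the parser must fail gracefully.
        ShowSystemEnvironmentals().parse('<response status="success"><result><thermal>', {}, {})

    def test_error_status_does_not_crash(self):
        # An API error response should be ignored, not raise an exception.
        ShowSystemEnvironmentals().parse('<response status="error"><msg>bad request</msg></response>', {}, {})


if __name__ == '__main__':
    unittest.main()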

 



IND is loaded without failure

  1. Metrics can be queried from the device

  2. No load ERROR messages in /usr/share/indeni-collector/logs/collector.log

  3. If a new API call is introduced, add it to the panos_xml_api.json collection

 

Rules

  • Test with rule-runner tool

    • "rule-runner compile <rule_filename>" to test rule syntax

    • "rule-runner compile <rule_filename> --input <mock_data>" to test different scenarios and validate rule logic

  • Rule appears in Knowledge Explorer for correct vendors



Review logs

Look for ERROR and FAIL in the following log files (a small scanning sketch follows the list):

  • /usr/share/indeni-services/logs/parser.log

  • /usr/share/indeni-collector/logs/collector.log

  • /usr/share/indeni-collector/logs/devices/<device_ip>.log

  • /usr/share/indeni/logs/rules
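
A small convenience sketch for scanning these logs (plain grep works just as well); treating the rules path as a possible directory is an assumption:

import glob
import os

LOG_PATTERNS = [
    '/usr/share/indeni-services/logs/parser.log',
    '/usr/share/indeni-collector/logs/collector.log',
    '/usr/share/indeni-collector/logs/devices/*.log',   # one log per device IP
    '/usr/share/indeni/logs/rules',                      # may be a directory of rule logs
]


def scan_logs():
    for pattern in LOG_PATTERNS:
        for path in glob.glob(pattern):
            # If the entry is a directory, scan the files inside it.
            files = glob.glob(os.path.join(path, '*')) if os.path.isdir(path) else [path]
            for log_file in files:
                if not os.path.isfile(log_file):
                    continue
                with open(log_file, errors='replace') as handle:
                    for line_number, line in enumerate(handle, start=1):
                        if 'ERROR' in line or 'FAIL' in line:
                            print(f'{log_file}:{line_number}: {line.rstrip()}')


if __name__ == '__main__':
    scan_logs()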



Alert

  • Alert is created as expected

  • If it is an issue-items alert, validate multiple scenarios

  • Alert is resolved as expected





Auto-Triage

  • Workflow appears in Knowledge Explorer for correct vendors

  • When an alert is created, the associated workflow is triggered

  • All flows are covered (using Workflow Integration Tool (WIT))





Important

Check out the New Common Mistakes Developing Automation Script page for more details.