
Train, evaluate, monitor, infer: End-to-end machine learning in Elastic

Machine learning pipelines have evolved tremendously in the past several years. With a wide variety of tools and frameworks out there to simplify building, training, and deployment, the turnaround time on machine learning model development has improved drastically. However, even with all these simplifications, there is still a steep learning curve associated with a lot of these tools. But not with Elastic.

In order to use machine learning in the Elastic Stack, all you really need is for your data to be stored in Elasticsearch. Once there, extracting valuable insights from your data is as simple as clicking a few buttons in Kibana. Machine learning is baked into the Elastic Stack, allowing you to easily and intuitively build a fully operational end-to-end machine learning pipeline. And in this blog, we’ll do just that.

Why Elastic?

Being a search company means that Elastic is built to efficiently handle large amounts of data. Searching and aggregating data for analysis is made simple and intuitive using Elasticsearch Query DSL. Large data sets can be visualized in a variety of ways in Kibana. The Elastic machine learning interface allows for easy feature and model selection, model training, and hyperparameter tuning. And after you’ve trained and tuned your model, Kibana can also be used to evaluate and visually monitor models. This makes the Elastic Stack the perfect one-stop shop for production-level machine learning.

Example data set: EMBER 2018

We’re going to demonstrate end-to-end machine learning in the Elastic Stack using the EMBER (Endgame Malware BEnchmark for Research) data set, released by Endgame to enable malware detection using static features derived from portable executable (PE) files. For this demonstration, we will use the EMBER 2018 data set, an open source collection of 1 million samples. Each sample includes the sha256 hash of the sample file, the month the file was first seen, a label, and the features derived from the file. 

For this experiment, we will select 300K samples (150K malicious and 150K benign) from the EMBER 2018 data set. To perform supervised learning on the samples, we must first select some features. The features in the data set are static features derived from the content of the binary files. We decided to experiment with the general, file header, and section information, strings, and byte histograms in order to study model performance with different subsets of the EMBER features. 
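That balanced subset selection can be sketched as follows (illustrative only — it assumes the labeled EMBER samples have been loaded as a list of dicts with a `label` key):

```python
import random

def balanced_sample(samples, n_per_class, seed=42):
    """Draw an equal number of malicious (label 1) and benign (label 0) samples."""
    rng = random.Random(seed)
    malicious = [s for s in samples if s["label"] == 1]
    benign = [s for s in samples if s["label"] == 0]
    subset = rng.sample(malicious, n_per_class) + rng.sample(benign, n_per_class)
    rng.shuffle(subset)  # interleave classes so ingest order carries no signal
    return subset

# For the experiment described here: balanced_sample(ember_samples, 150_000)
```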

End-to-end machine learning in the Elastic Stack: A walkthrough

For the purposes of this demo, we will use the Python Elasticsearch Client to insert data into Elasticsearch, Elastic machine learning’s data frame analytics feature to create training jobs, and Kibana to visually monitor models after training. 

We will create two supervised jobs: one using general, file header, and section information plus strings as features, and another using just byte histograms. This lets us demonstrate training multiple models simultaneously in the Stack and, later, visualizing multiple candidate models together.

Elasticsearch setup

In order to use machine learning in the Elastic Stack, we first need to spin up Elasticsearch with a machine learning node. For this, we can start a 14-day free trial of Elastic Cloud. Our example deployment has the following settings:

  • Cloud Platform: Amazon Web Services
  • Region: US West (N. California)
  • Optimization: I/O Optimized
  • Customize Deployment: Enable Machine Learning

We also need to create API keys and assign them appropriate privileges to interact with Elasticsearch using the Python Elasticsearch Client. For our walkthrough, we will be inserting data into the ember_ml index, so we’ll create a key as follows:

POST /_security/api_key 
{
  "name": "my_awesome_key",
  "role_descriptors": {
    "role_1": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["ember_*"],
          "privileges": ["all"]
        }
      ]
    }
  }
}

Data ingest

Once we have our Elasticsearch instance set up, we’ll begin by ingesting data into an Elasticsearch index. First, we will create an index called ember_ml and then we will ingest the documents that make up our data set into it using the Python Elasticsearch Client. We will ingest all the features required for both models into a single index, using the Streaming Bulk Helper in order to bulk ingest documents into Elasticsearch. The Python code to create the ember_ml index and bulk ingest documents into it is as follows:

import elasticsearch 
import certifi
from elasticsearch import Elasticsearch, helpers
# Long list of documents to be inserted into Elasticsearch, showing one as an example
documents = [
{
"_index": "ember_ml",
"_id": "771434adbbfa2ff5740eb91d9deb51828e0f4b060826b590cd9fd8dd46ee0d40",
"_source": {
"sha256": "771434adbbfa2ff5740eb91d9deb51828e0f4b060826b590cd9fd8dd46ee0d4b",
"appeared": "2018-01-06 00:00:00",
"label": 1,
"byte_0": 0.1826012283563614,
"byte_1": 0.006036404054611921,
"byte_2": 0.003830794943496585,
"byte_3": 0.004225482698529959,
"byte_4": 0.004388001281768084,
"byte_5": 0.0036218424793332815,
"byte_6": 0.0035289747174829245,
"byte_7": 0.004666604567319155,
"byte_8": 0.004225482698529959,
"byte_9": 0.0029253342654556036,
"byte_10": 0.0034361069556325674,
"byte_11": 0.003993313293904066,
"byte_12": 0.004039747174829245,
"byte_13": 0.0029253342654556036,
"byte_14": 0.0030182020273059607,
"byte_15": 0.0036450594197958708,
"byte_16": 0.004573736805468798,
"byte_17": 0.002693164860829711,
"byte_18": 0.002507429337128997,
"byte_19": 0.0026699479203671217,
"byte_20": 0.003505757777020335,
"byte_21": 0.0022056091111153364,
"byte_22": 0.0032503714319318533,
"byte_23": 0.0025770801585167646,
"byte_24": 0.005363112781196833,
"byte_25": 0.002600297098979354,
"byte_26": 0.0025538632180541754,
"byte_27": 0.0031807206105440855,
"byte_28": 0.0034593238960951567,
"byte_29": 0.0022288260515779257,
"byte_30": 0.002507429337128997,
"byte_31": 0.0025770801585167646,
"byte_32": 0.004921990912407637,
"byte_33": 0.0028092495631426573,
"byte_34": 0.0017877042992040515,
"byte_35": 0.0033664561342447996,
"byte_36": 0.002437778515741229,
"byte_37": 0.0021359582897275686,
"byte_38": 0.0016716195968911052,
"byte_39": 0.0020430905278772116,
"byte_40": 0.003227154491469264,
"byte_41": 0.0025770801585167646,
"byte_42": 0.0017644873587414622,
"byte_43": 0.0032039375510066748,
"byte_44": 0.003296805312857032,
"byte_45": 0.003134286729618907,
"byte_46": 0.0028324665036052465,
"byte_47": 0.003505757777020335,
"byte_48": 0.0038772288244217634,
"byte_49": 0.0035521916579455137,
"byte_50": 0.0031110697891563177,
"byte_51": 0.00417904881760478,
"byte_52": 0.004225482698529959,
"byte_53": 0.0032503714319318533,
"byte_54": 0.0035289747174829245,
"byte_55": 0.003320022253319621,
"byte_56": 0.0030878528486937284,
"byte_57": 0.003575408598408103,
"byte_58": 0.002182392170652747,
"byte_59": 0.0029021173249930143,
"byte_60": 0.002344910753890872,
"byte_61": 0.0020430905278772116,
"byte_62": 0.0015555348945781589,
"byte_63": 0.0020198735874146223,
"byte_64": 0.004016530234366655,
"byte_65": 0.004457652103155851,
"byte_66": 0.0036450594197958708,
"byte_67": 0.0036218424793332815,
"byte_68": 0.0038075780030339956,
"byte_69": 0.0033432391937822104,
"byte_70": 0.004852340091019869,
"byte_71": 0.004039747174829245,
"byte_72": 0.00480590621009469,
"byte_73": 0.002971768146380782,
"byte_74": 0.002693164860829711,
"byte_75": 0.0039468794129788876,
"byte_76": 0.0036450594197958708,
"byte_77": 0.0034361069556325674,
"byte_78": 0.0028324665036052465,
"byte_79": 0.0028324665036052465,
"byte_80": 0.005664933007210493,
"byte_81": 0.0029949850868433714,
"byte_82": 0.0031110697891563177,
"byte_83": 0.004527302924543619,
"byte_84": 0.003923662472516298,
"byte_85": 0.0029949850868433714,
"byte_86": 0.004016530234366655,
"byte_87": 0.004573736805468798,
"byte_88": 0.004109397996217012,
"byte_89": 0.003296805312857032,
"byte_90": 0.0033664561342447996,
"byte_91": 0.0034593238960951567,
"byte_92": 0.0031110697891563177,
"byte_93": 0.0022984768729656935,
"byte_94": 0.0022288260515779257,
"byte_95": 0.002275259932503104,
"byte_96": 0.002855683444067836,
"byte_97": 0.0035986255388706923,
"byte_98": 0.0026699479203671217,
"byte_99": 0.0037843610625714064,
"byte_100": 0.004364784341305494,
"byte_101": 0.004016530234366655,
"byte_102": 0.004713038448244333,
"byte_103": 0.003505757777020335,
"byte_104": 0.005479197483509779,
"byte_105": 0.0032503714319318533,
"byte_106": 0.00366827636025846,
"byte_107": 0.004016530234366655,
"byte_108": 0.005061292555183172,
"byte_109": 0.005014858674257994,
"byte_110": 0.0039468794129788876,
"byte_111": 0.004109397996217012,
"byte_112": 0.004596953745931387,
"byte_113": 0.0021127413492649794,
"byte_114": 0.0046433876268565655,
"byte_115": 0.004086181055754423,
"byte_116": 0.005664933007210493,
"byte_117": 0.005293461959809065,
"byte_118": 0.0039468794129788876,
"byte_119": 0.0038075780030339956,
"byte_120": 0.0035289747174829245,
"byte_121": 0.004480869043618441,
"byte_122": 0.00183413818012923,
"byte_123": 0.0032503714319318533,
"byte_124": 0.0027163818012923002,
"byte_125": 0.002066307468339801,
"byte_126": 0.003505757777020335,
"byte_127": 0.002252042992040515,
"byte_128": 0.0033432391937822104,
"byte_129": 0.0032039375510066748,
"byte_130": 0.001741270418278873,
"byte_131": 0.003923662472516298,
"byte_132": 0.003830794943496585,
"byte_133": 0.0033664561342447996,
"byte_134": 0.0034361069556325674,
"byte_135": 0.0014162332518026233,
"byte_136": 0.002600297098979354,
"byte_137": 0.00304141896776855,
"byte_138": 0.0022984768729656935,
"byte_139": 0.0037147102411836386,
"byte_140": 0.0051773772574961185,
"byte_141": 0.003296805312857032,
"byte_142": 0.0031575036700814962,
"byte_143": 0.0015555348945781589,
"byte_144": 0.003064635908231139,
"byte_145": 0.002693164860829711,
"byte_146": 0.0012304977281019092,
"byte_147": 0.0015555348945781589,
"byte_148": 0.003830794943496585,
"byte_149": 0.0028092495631426573,
"byte_150": 0.00208952440880239,
"byte_151": 0.0014626671327278018,
"byte_152": 0.0026699479203671217,
"byte_153": 0.004388001281768084,
"byte_154": 0.0019502228824421763,
"byte_155": 0.0017644873587414622,
"byte_156": 0.004086181055754423,
"byte_157": 0.0017180534778162837,
"byte_158": 0.003412890015169978,
"byte_159": 0.002252042992040515,
"byte_160": 0.002507429337128997,
"byte_161": 0.002437778515741229,
"byte_162": 0.002623514039441943,
"byte_163": 0.0022288260515779257,
"byte_164": 0.0020430905278772116,
"byte_165": 0.0022984768729656935,
"byte_166": 0.0017180534778162837,
"byte_167": 0.0010911960853263736,
"byte_168": 0.002159175230190158,
"byte_169": 0.0015091010136529803,
"byte_170": 0.003227154491469264,
"byte_171": 0.0025770801585167646,
"byte_172": 0.0027628156822174788,
"byte_173": 0.0029253342654556036,
"byte_174": 0.0013697993708774447,
"byte_175": 0.001648402656428516,
"byte_176": 0.003134286729618907,
"byte_177": 0.0016019687755033374,
"byte_178": 0.002437778515741229,
"byte_179": 0.001927005941979587,
"byte_180": 0.0027163818012923002,
"byte_181": 0.004016530234366655,
"byte_182": 0.003227154491469264,
"byte_183": 0.00241456157527864,
"byte_184": 0.0025538632180541754,
"byte_185": 0.00208952440880239,
"byte_186": 0.001648402656428516,
"byte_187": 0.002275259932503104,
"byte_188": 0.0025538632180541754,
"byte_189": 0.0028092495631426573,
"byte_190": 0.0021359582897275686,
"byte_191": 0.0027395987417548895,
"byte_192": 0.0030878528486937284,
"byte_193": 0.0027395987417548895,
"byte_194": 0.00208952440880239,
"byte_195": 0.002878900384530425,
"byte_196": 0.0021359582897275686,
"byte_197": 0.00208952440880239,
"byte_198": 0.0027395987417548895,
"byte_199": 0.0019734397064894438,
"byte_200": 0.003064635908231139,
"byte_201": 0.002066307468339801,
"byte_202": 0.0012304977281019092,
"byte_203": 0.00183413818012923,
"byte_204": 0.003389673074707389,
"byte_205": 0.00304141896776855,
"byte_206": 0.0029021173249930143,
"byte_207": 0.0024609954562038183,
"byte_208": 0.0029021173249930143,
"byte_209": 0.002507429337128997,
"byte_210": 0.0022288260515779257,
"byte_211": 0.0019734397064894438,
"byte_212": 0.0023913446348160505,
"byte_213": 0.0017180534778162837,
"byte_214": 0.0032735883723944426,
"byte_215": 0.0023216938134282827,
"byte_216": 0.003412890015169978,
"byte_217": 0.0025538632180541754,
"byte_218": 0.002530646277591586,
"byte_219": 0.004550519865006208,
"byte_220": 0.003320022253319621,
"byte_221": 0.002437778515741229,
"byte_222": 0.003389673074707389,
"byte_223": 0.002855683444067836,
"byte_224": 0.0031575036700814962,
"byte_225": 0.0018109212396666408,
"byte_226": 0.002182392170652747,
"byte_227": 0.003737927181646228,
"byte_228": 0.0036218424793332815,
"byte_229": 0.0014626671327278018,
"byte_230": 0.0024609954562038183,
"byte_231": 0.002600297098979354,
"byte_232": 0.0024609954562038183,
"byte_233": 0.0015323179541155696,
"byte_234": 0.001137629966251552,
"byte_235": 0.004341567400842905,
"byte_236": 0.004782689269632101,
"byte_237": 0.0024609954562038183,
"byte_238": 0.0016716195968911052,
"byte_239": 0.0028092495631426573,
"byte_240": 0.0036218424793332815,
"byte_241": 0.00183413818012923,
"byte_242": 0.0035289747174829245,
"byte_243": 0.002623514039441943,
"byte_244": 0.0022984768729656935,
"byte_245": 0.001741270418278873,
"byte_246": 0.003296805312857032,
"byte_247": 0.003412890015169978,
"byte_248": 0.003134286729618907,
"byte_249": 0.0023913446348160505,
"byte_250": 0.0012304977281019092,
"byte_251": 0.0067561292089521885,
"byte_252": 0.005943536292761564,
"byte_253": 0.0031575036700814962,
"byte_254": 0.004480869043618441,
"byte_255": 0.038958024233579636,
"strings_0": 488,
"strings_1": 7.477458953857422,
"strings_2": 3649,
"strings_3": 0.011784050613641739,
"strings_4": 0.0043847630731761456,
"strings_5": 0.003562619909644127,
"strings_6": 0.005206905771046877,
"strings_7": 0.004110715351998806,
"strings_8": 0.003014524467289448,
"strings_9": 0.003562619909644127,
"strings_10": 0.005755001213401556,
"strings_11": 0.006029048934578896,
"strings_12": 0.003014524467289448,
"strings_13": 0.0019183338154107332,
"strings_14": 0.010961906984448433,
"strings_15": 0.006577144376933575,
"strings_16": 0.006851192098110914,
"strings_17": 0.008769526146352291,
"strings_18": 0.013428336940705776,
"strings_19": 0.011784050613641739,
"strings_20": 0.012058097869157791,
"strings_21": 0.014250479638576508,
"strings_22": 0.013428336940705776,
"strings_23": 0.01315428875386715,
"strings_24": 0.01068785972893238,
"strings_25": 0.01315428875386715,
"strings_26": 0.012880241498351097,
"strings_27": 0.010139764286577702,
"strings_28": 0.010413811542093754,
"strings_29": 0.0027404767461121082,
"strings_30": 0.006029048934578896,
"strings_31": 0.004658810794353485,
"strings_32": 0.0021923815365880728,
"strings_33": 0.0027404767461121082,
"strings_34": 0.004110715351998806,
"strings_35": 0.005755001213401556,
"strings_36": 0.01589476503431797,
"strings_37": 0.011784050613641739,
"strings_38": 0.01397643145173788,
"strings_39": 0.010413811542093754,
"strings_40": 0.016168814152479172,
"strings_41": 0.015346670523285866,
"strings_42": 0.012332146055996418,
"strings_43": 0.013428336940705776,
"strings_44": 0.01452452689409256,
"strings_45": 0.00986571703106165,
"strings_46": 0.016442861407995224,
"strings_47": 0.014798575080931187,
"strings_48": 0.012058097869157791,
"strings_49": 0.01068785972893238,
"strings_50": 0.010413811542093754,
"strings_51": 0.015620717778801918,
"strings_52": 0.010139764286577702,
"strings_53": 0.013428336940705776,
"strings_54": 0.015072622336447239,
"strings_55": 0.014250479638576508,
"strings_56": 0.011510002426803112,
"strings_57": 0.012880241498351097,
"strings_58": 0.01397643145173788,
"strings_59": 0.012332146055996418,
"strings_60": 0.01068785972893238,
"strings_61": 0.00931762158870697,
"strings_62": 0.00986571703106165,
"strings_63": 0.005206905771046877,
"strings_64": 0.003014524467289448,
"strings_65": 0.003014524467289448,
"strings_66": 0.003562619909644127,
"strings_67": 0.0043847630731761456,
"strings_68": 0.01397643145173788,
"strings_69": 0.010413811542093754,
"strings_70": 0.017539052292704582,
"strings_71": 0.017539052292704582,
"strings_72": 0.02000548131763935,
"strings_73": 0.016442861407995224,
"strings_74": 0.014250479638576508,
"strings_75": 0.01452452689409256,
"strings_76": 0.01260619331151247,
"strings_77": 0.011510002426803112,
"strings_78": 0.013428336940705776,
"strings_79": 0.014798575080931187,
"strings_80": 0.016442861407995224,
"strings_81": 0.01452452689409256,
"strings_82": 0.017813099548220634,
"strings_83": 0.015072622336447239,
"strings_84": 0.00931762158870697,
"strings_85": 0.01452452689409256,
"strings_86": 0.014250479638576508,
"strings_87": 0.015620717778801918,
"strings_88": 0.014250479638576508,
"strings_89": 0.012332146055996418,
"strings_90": 0.013702384196221828,
"strings_91": 0.01397643145173788,
"strings_92": 0.00986571703106165,
"strings_93": 0.006303096655756235,
"strings_94": 0.004110715351998806,
"strings_95": 0.0027404767461121082,
"strings_96": 0.0027404767461121082,
"strings_97": 0.0024664292577654123,
"strings_98": 0.007399287540465593,
"strings_99": 6.4175848960876465,
"strings_100": 0,
"strings_101": 0,
"strings_102": 0,
"strings_103": 3,
"general_info_0": 43072,
"general_info_1": 110592,
"general_info_2": 0,
"general_info_3": 0,
"general_info_4": 5,
"general_info_5": 0,
"general_info_6": 1,
"general_info_7": 0,
"general_info_8": 0,
"general_info_9": 0,
"file_header_0": 1142459136,
"file_header_1": 0,
"file_header_2": 0,
"file_header_3": 0,
"file_header_4": 0,
"file_header_5": 0,
"file_header_6": 1,
"file_header_7": 0,
"file_header_8": 0,
"file_header_9": 0,
"file_header_10": 0,
"file_header_11": 0,
"file_header_12": 0,
"file_header_13": -1,
"file_header_14": 0,
"file_header_15": -1,
"file_header_16": -1,
"file_header_17": 0,
"file_header_18": 0,
"file_header_19": 0,
"file_header_20": 0,
"file_header_21": 0,
"file_header_22": 0,
"file_header_23": 0,
"file_header_24": 0,
"file_header_25": 0,
"file_header_26": 0,
"file_header_27": 0,
"file_header_28": 1,
"file_header_29": 0,
"file_header_30": 0,
"file_header_31": 0,
"file_header_32": 0,
"file_header_33": 0,
"file_header_34": 0,
"file_header_35": 0,
"file_header_36": 0,
"file_header_37": 0,
"file_header_38": 0,
"file_header_39": 0,
"file_header_40": 0,
"file_header_41": 0,
"file_header_42": -1,
"file_header_43": 0,
"file_header_44": 0,
"file_header_45": 0,
"file_header_46": 0,
"file_header_47": 0,
"file_header_48": 0,
"file_header_49": 0,
"file_header_50": 0,
"file_header_51": 0,
"file_header_52": 0,
"file_header_53": 2,
"file_header_54": 48,
"file_header_55": 4,
"file_header_56": 0,
"file_header_57": 4,
"file_header_58": 0,
"file_header_59": 32768,
"file_header_60": 4096,
"file_header_61": 4096,
"sections_0": 3,
"sections_1": 1,
"sections_2": 0,
"sections_3": 1,
"sections_4": 3,
"sections_5": 0,
"sections_6": 0,
"sections_7": 0,
"sections_8": 0,
"sections_9": 0,
"sections_10": 0,
"sections_11": 0,
"sections_12": 0,
"sections_13": 0,
"sections_14": 0,
"sections_15": 0,
"sections_16": 0,
"sections_17": 0,
"sections_18": 0,
"sections_19": 0,
"sections_20": 0,
"sections_21": 0,
"sections_22": 0,
"sections_23": 0,
"sections_24": 0,
"sections_25": 0,
"sections_26": 0,
"sections_27": 0,
"sections_28": 0,
"sections_29": 0,
"sections_30": 0,
"sections_31": 0,
"sections_32": 0,
"sections_33": 0,
"sections_34": 0,
"sections_35": 0,
"sections_36": 0,
"sections_37": 0,
"sections_38": 0,
"sections_39": 0,
"sections_40": 0,
"sections_41": 0,
"sections_42": 0,
"sections_43": 0,
"sections_44": 0,
"sections_45": 0,
"sections_46": 0,
"sections_47": 0,
"sections_48": 0,
"sections_49": 0,
"sections_50": 0,
"sections_51": 0,
"sections_52": -42048,
"sections_53": 0,
"sections_54": 0,
"sections_55": 0,
"sections_56": 0,
"sections_57": 0,
"sections_58": 0,
"sections_59": 0,
"sections_60": 0,
"sections_61": 0,
"sections_62": 0,
"sections_63": 0,
"sections_64": 0,
"sections_65": 0,
"sections_66": 0,
"sections_67": 0,
"sections_68": 0,
"sections_69": 0,
"sections_70": 0,
"sections_71": 0,
"sections_72": 0,
"sections_73": 0,
"sections_74": 0,
"sections_75": 0,
"sections_76": 0,
"sections_77": 0,
"sections_78": 0,
"sections_79": 0,
"sections_80": 0,
"sections_81": 0,
"sections_82": 0,
"sections_83": 0,
"sections_84": 0,
"sections_85": 0,
"sections_86": 0,
"sections_87": 0,
"sections_88": 0,
"sections_89": 0,
"sections_90": 0,
"sections_91": 0,
"sections_92": 0,
"sections_93": 0,
"sections_94": 0,
"sections_95": 0,
"sections_96": 0,
"sections_97": 0,
"sections_98": 0,
"sections_99": 0,
"sections_100": 0,
"sections_101": 0,
"sections_102": -11.691457748413086,
"sections_103": 0,
"sections_104": 0,
"sections_105": 0,
"sections_106": 0,
"sections_107": 0,
"sections_108": 0,
"sections_109": 0,
"sections_110": 0,
"sections_111": 0,
"sections_112": 0,
"sections_113": 0,
"sections_114": 0,
"sections_115": 0,
"sections_116": 0,
"sections_117": 0,
"sections_118": 0,
"sections_119": 0,
"sections_120": 0,
"sections_121": 0,
"sections_122": 0,
"sections_123": 0,
"sections_124": 0,
"sections_125": 0,
"sections_126": 0,
"sections_127": 0,
"sections_128": 0,
"sections_129": 0,
"sections_130": 0,
"sections_131": 0,
"sections_132": 0,
"sections_133": 0,
"sections_134": 0,
"sections_135": 0,
"sections_136": 0,
"sections_137": 0,
"sections_138": 0,
"sections_139": 0,
"sections_140": 0,
"sections_141": 0,
"sections_142": 0,
"sections_143": 0,
"sections_144": 0,
"sections_145": 0,
"sections_146": 0,
"sections_147": 0,
"sections_148": 0,
"sections_149": 0,
"sections_150": 0,
"sections_151": 0,
"sections_152": -102464,
"sections_153": 0,
"sections_154": 0,
"sections_155": 0,
"sections_156": 0,
"sections_157": 2,
"sections_158": 0,
"sections_159": 0,
"sections_160": 0,
"sections_161": 0,
"sections_162": 0,
"sections_163": 0,
"sections_164": 2,
"sections_165": 0,
"sections_166": 0,
"sections_167": 2,
"sections_168": 0,
"sections_169": 0,
"sections_170": 0,
"sections_171": 0,
"sections_172": 0,
"sections_173": 0,
"sections_174": 0,
"sections_175": 0,
"sections_176": 0,
"sections_177": 0,
"sections_178": 0,
"sections_179": 0,
"sections_180": 0,
"sections_181": 2,
"sections_182": 0,
"sections_183": 0,
"sections_184": 0,
"sections_185": 0,
"sections_186": 0,
"sections_187": 0,
"sections_188": 0,
"sections_189": 0,
"sections_190": 0,
"sections_191": 0,
"sections_192": 0,
"sections_193": 0,
"sections_194": 0,
"sections_195": 0,
"sections_196": 0,
"sections_197": 0,
"sections_198": 0,
"sections_199": 0,
"sections_200": 0,
"sections_201": 0,
"sections_202": 0,
"sections_203": 0,
"sections_204": 0,
"sections_205": 2,
"sections_206": 0,
"sections_207": 0,
"sections_208": 0,
"sections_209": 0,
"sections_210": 0,
"sections_211": 0,
"sections_212": 0,
"sections_213": 0,
"sections_214": 0,
"sections_215": 0,
"sections_216": 0,
"sections_217": 0,
"sections_218": -1,
"sections_219": 0,
"sections_220": 0,
"sections_221": 0,
"sections_222": 0,
"sections_223": 0,
"sections_224": 0,
"sections_225": 0,
"sections_226": 0,
"sections_227": 0,
"sections_228": 3,
"sections_229": 0,
"sections_230": 0,
"sections_231": 0,
"sections_232": 0,
"sections_233": 0,
"sections_234": 0,
"sections_235": 0,
"sections_236": 0,
"sections_237": 0,
"sections_238": 0,
"sections_239": 0,
"sections_240": 0,
"sections_241": 0,
"sections_242": 3,
"sections_243": 0,
"sections_244": 0,
"sections_245": 0,
"sections_246": 0,
"sections_247": 0,
"sections_248": 0,
"sections_249": 0,
"sections_250": 0,
"sections_251": 0,
"sections_252": -1,
"sections_253": 0,
"sections_254": 0
}
}
]
url = "YOUR_ELASTICSEARCH_ENDPOINT_URL"
api_key = "YOUR_API_KEY"
api_id = "YOUR_API_ID"
# Initialize Elasticsearch client
es = Elasticsearch(
    url,
    api_key=(api_id, api_key),
    use_ssl=True,
    ca_certs=certifi.where()
)
# Create index
es.indices.create(index="ember_ml")
# Bulk ingest documents into Elasticsearch
try:
    for success, info in helpers.streaming_bulk(es, documents, chunk_size=2500):
        if not success:
            print("A document failed:", info)
except elasticsearch.ElasticsearchException:
    print("Failed to insert")

Note that the feature vectors need to be flattened, i.e., each feature needs to be a separate field of a supported data type (numeric, boolean, text, keyword, or IP) in each document, since data frame analytics does not support arrays with more than one element. Also notice that the “appeared” (first seen) field in the EMBER data set has been altered to match an Elasticsearch-compatible date format for the purpose of making time series visualizations later. 
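The flattening step can be sketched with a small helper. The date conversion shown is an illustrative assumption: EMBER records only the year and month a file first appeared, so some fixed day has to be chosen when building a full date.

```python
from datetime import datetime

def flatten_features(name, values):
    """Expand a feature vector into flat numeric fields: name_0, name_1, ..."""
    return {f"{name}_{i}": v for i, v in enumerate(values)}

def to_es_date(year_month, day=1):
    """Turn EMBER's 'YYYY-MM' first-seen value into an Elasticsearch-friendly date."""
    d = datetime.strptime(year_month, "%Y-%m").replace(day=day)
    return d.strftime("%Y-%m-%d %H:%M:%S")
```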

To make sure that all our data is ingested in the right format into Elasticsearch, we run the following queries in the Dev Tools Console (Management -> Dev Tools):

To get a count of the documents in the index:

GET ember_ml/_count

To search for documents in the index and make sure they’re in the right format:

GET ember_ml/_search

Once we have verified that the data in Elasticsearch looks as expected, we are now ready to create our analytics jobs. However, before creating the jobs, we need to define an index pattern for the job. Index patterns tell Kibana (and consequently the job) which Elasticsearch indices contain the data that you want to work with. We create the index pattern ember_* to match our index ember_ml.

Model training

Once the index pattern is created, we’ll create two analytics jobs with the two subsets of features, as mentioned above. This can be done via the Machine Learning app in Kibana. We will configure our job as follows:

  • Job type: We select “classification” to predict whether a given binary is malicious or benign. The underlying classification model in Elastic machine learning is an ensemble of decision trees trained with gradient boosting, which combines multiple weak learners into a stronger composite model. The trees learn to predict the probability that a data point belongs to a given class.
  • Dependent variable: “label” in our case, 1 for malicious and 0 for benign.
  • Fields to include: We select the fields we would like to include in the training. 
  • Training percentage: It is recommended that you use an iterative approach to training, especially if you’re working with a large data set (i.e., start by creating a training job with a smaller training percentage, evaluate the performance, and decide if it is necessary to increase the training percentage). We’ll start with a training percentage of 10% since we are working with a sizable data set (300K documents).
  • Additional information options: We’ll leave the defaults as they are, but you can choose to set hyperparameters for the training job at this stage.
  • Job details: We’ll assign an appropriate job ID and destination index for the job.
  • Create index pattern: We’ll disable this, since we will be creating a single index pattern to match the destination indices for both our training jobs in order to visualize the results together. 

We’ll create two analytics jobs following the process described above: one with only the byte histogram as features (destination index: bytes_preds) and one with everything but the byte histogram as features (destination index: main_preds). The analytics job determines the best encodings for each feature, the best performing features, and the optimal hyperparameters for the model. Job progress can also be tracked in the Machine Learning app:

Tracking job progress in the Machine Learning app
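For reference, this kind of job can also be created through the data frame analytics API rather than the Kibana wizard. A hedged sketch for the first job — the job ID, excluded fields, and training percentage simply mirror the choices described above:

```
PUT _ml/data_frame/analytics/ember_main
{
  "source": { "index": "ember_ml" },
  "dest": { "index": "main_preds" },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 10
    }
  },
  "analyzed_fields": {
    "excludes": ["sha256", "appeared", "byte_*"]
  }
}
```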

Model evaluation

Once the jobs have completed, we can view the prediction results by clicking the View button next to each of the completed jobs. Doing so brings up a dataframe-style view of the contents of the destination index, along with the confusion matrix for the model. Each row in the dataframe (shown below) shows whether a sample was used in training, along with the model’s prediction, the actual label, and the class probability and score:

Dataframe view of the main_preds destination index

We use the confusion matrices to evaluate and compare the performance of the two models. Each row in the confusion matrix represents instances of an actual class and each column represents instances of a predicted class, giving counts of true negatives and false positives (top row) and false negatives and true positives (bottom row).

Confusion matrix for model with general, file header and section information, and strings as features

Confusion matrix for model with byte histogram as features
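The headline numbers behind these matrices reduce to a few ratios. A quick sketch for deriving them from raw confusion-matrix counts (the counts themselves would be read off the matrices above):

```python
def rates(tp, fp, fn, tn):
    """Standard classification rates from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn),  # benign misclassified as malicious
        "false_negative_rate": fn / (fn + tp),  # malicious misclassified as benign
    }
```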

We see that both models have pretty good accuracy (at least for the purposes of a demo!), so we decide against another round of training or hyperparameter tuning. In the next section, we will see how to visually compare the two models in Kibana and decide which model to deploy.

Model monitoring

Once we have the predictions for both the models in their respective destination indices, we will create an index pattern (*_preds, in this case) to match the two in order to create model monitoring dashboards in Kibana. For this example, the monitoring dashboard serves two purposes:

  • Compare the performance of the byte histogram-only model with the other model; we use TSVB visualizations for this.
  • Track different metrics for the better performing model; we use vertical bar visualizations to visualize prediction probabilities and benign vs malicious sample counts as well as TSVB to track false positive rate and false negative rate.

False negative rate and false positive rate of the two trained models over time
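These time-bucketed rates can also be computed directly from the prediction documents. A sketch, assuming the destination index’s default prediction field for a dependent variable named label (ml.label_prediction) — field names should be checked against your own destination index:

```python
from collections import defaultdict

def fp_rate_by_month(docs):
    """False positive rate (benign predicted malicious) per first-seen month."""
    buckets = defaultdict(lambda: {"fp": 0, "tn": 0})
    for doc in docs:
        if doc["label"] != 0:
            continue  # the rate is computed over actual benign samples only
        month = doc["appeared"][:7]  # e.g. "2018-01"
        key = "fp" if doc["ml"]["label_prediction"] == 1 else "tn"
        buckets[month][key] += 1
    return {m: b["fp"] / (b["fp"] + b["tn"]) for m, b in buckets.items()}
```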

By observing the false negative rate and false positive rate of the two models across a significant time range, and looking at the confusion matrices shown in the previous section, we conclude that the model trained on general, file header and section information, and strings is the better performing model. We then plot various metrics that we would like to track for this model, assuming this is the one we want to deploy and monitor post-deployment.

Dashboard of various model performance metrics created in Kibana

In real-world use cases, such monitoring dashboards can be used to compare candidate models for production and, once a model has been deployed, to identify indicators of model decay (e.g., false positive bursts) in production environments and trigger relevant responses (e.g., new model training). In the next section, we’ll see how to deploy our chosen model for use in a machine learning production pipeline.

Deploying our supervised model to enrich data at ingest time

In addition to model training and evaluation, the Elastic Stack also provides a way to use trained models in ingest pipelines. This opens up an avenue for enriching your data with machine learning models at ingest time. In this section, we will take a look at how to do exactly that with the malware classification model we trained above!

Suppose that in this case we have an incoming stream of data extracted from binaries that we wish to classify as either malicious or benign. We will ingest this data into Elasticsearch through an ingest pipeline and reference our trained malware classification model in an inference processor. 

First, let’s create our inference processor and ingest pipeline. The most important part of the inference processor is the trained model and its model_id, which we can look up with the following REST API call in the Kibana console:

GET _ml/inference

This will return a list of trained models in our cluster and for each model, display characteristics such as the model_id (which we should make a note of for inference), the fields used for training the model, when the model was trained, and so forth.

Sample output from a call to retrieve information about trained models shows the model_id, which is required for configuring inference processors

If you have a large number of trained models in your cluster, it might be helpful to run the API call above with a wildcard query based on the name of the data frame analytics job that was used to train the model. In this case, the models we care about were trained with jobs called ember_*, so we can run 

GET _ml/inference/ember_*

to quickly narrow down our models to the desired ones. 
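The same narrowing can also be done client-side on the API response. The sketch below assumes a response shape with a trained_model_configs array; the sample values are abridged and hypothetical.

```python
import json

# Example (abridged, hypothetical) shape of a GET _ml/inference response.
sample_response = json.loads("""
{
  "count": 2,
  "trained_model_configs": [
    {"model_id": "ember_main-1598557689011", "created_by": "data-frame-analytics"},
    {"model_id": "ember_exp-1598560000000", "created_by": "data-frame-analytics"}
  ]
}
""")

def model_ids(response, prefix=""):
    """Collect model_ids from the response, optionally filtered by prefix --
    the client-side equivalent of GET _ml/inference/ember_*."""
    return [cfg["model_id"]
            for cfg in response.get("trained_model_configs", [])
            if cfg["model_id"].startswith(prefix)]
```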

Once we have the model_id, we can create our ingest pipeline configuration. The full configuration is shown below. Note the configuration block titled inference: it references the model we wish to use to enrich our documents, and it specifies a target_field (which in this case we've set to is_malware, but which can of course be set according to preference) that will prefix the ML fields added when the document is processed by the inference processor.

PUT _ingest/pipeline/malware-classification
{
  "description": "Classifies incoming binaries as malicious or benign",
  "processors": [
    {
      "inference": {
        "model_id": "ember_main-1598557689011",
        "target_field": "is_malware",
        "inference_config": {
          "classification": {
            "num_top_classes": 2
          }
        }
      }
    }
  ]
}
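To make the effect of target_field concrete, the sketch below mimics, purely for illustration, how the processor nests the model's output under the configured field. The exact output field names can vary by stack version, so treat them as an assumption to verify against your cluster.

```python
def apply_inference_stub(doc, target_field, predicted_value, top_classes, model_id):
    """Illustrative stand-in for the inference processor: it nests the model's
    output under the configured target_field (is_malware in our pipeline),
    leaving the original feature fields untouched."""
    enriched = dict(doc)  # shallow copy; the real processor mutates the ingest doc
    enriched[target_field] = {
        "predicted_value": predicted_value,  # assigned class: 1 = malicious
        "top_classes": top_classes,          # per num_top_classes in the config
        "model_id": model_id,
    }
    return enriched
```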

Now suppose we are ingesting documents with features of binaries and we wish to enrich this data with predictions of the maliciousness of each binary. An abridged sample document is shown below:

{
  "appeared" : "2020-04-01 00:00:00",
  "byte_0" : 0.1622137576341629,
  "byte_1" : 0.007498478516936302,
  "byte_2" : 0.003992937505245209,
  "byte_3" : 0.00546838915720582,
  "byte_4" : 0.007421959958970547,
  ...
  "byte_253" : 0.0019106657709926367,
  "byte_254" : 0.003551538335159421,
  "byte_255" : 0.1782810389995575,
  "strings_0" : 3312.0,
  "strings_1" : 24.97675132751465,
  "strings_2" : 82723.0,
  "strings_3" : 0.07208394259214401,
  "strings_4" : 8.099319529719651E-4,
  "strings_5" : 0.005427753087133169,
  ...
  "strings_100" : 0.0,
  "strings_101" : 39.0,
  "strings_102" : 0.0,
  "strings_103" : 9.0,
  "general_info_0" : 1130496.0,
  "general_info_1" : 1134592.0,
  "general_info_2" : 1.0,
  "general_info_3" : 0.0,
  "general_info_4" : 247.0,
  "general_info_5" : 1.0,
  "general_info_6" : 1.0,
  "general_info_7" : 1.0,
  "general_info_8" : 1.0,
  "general_info_9" : 0.0,
  "file_header_0" : 1.511340288E9,
  "file_header_1" : 0.0,
  "file_header_2" : 0.0,
  "file_header_3" : 0.0,
  "file_header_4" : 0.0,
  "file_header_5" : 0.0,
  "file_header_6" : 1.0,
  "file_header_7" : 0.0,
  "file_header_8" : 0.0,
  "file_header_9" : 0.0,
  ...
  "file_header_59" : 262144.0,
  "file_header_60" : 1024.0,
  "file_header_61" : 4096.0,
  "sections_0" : 5.0,
  "sections_1" : 0.0,
  "sections_2" : 0.0,
  "sections_3" : 1.0,
  "sections_4" : 1.0,
  "sections_5" : 0.0,
  ...
  "sections_253" : 0.0,
  "sections_254" : 0.0
}

We can ingest this document using one of the Index APIs, routing it through the malware-classification pipeline we created above. An example API call that ingests this document into a destination index called main_preds is shown below. To save space, the document has been abridged.

POST main_preds/_doc?pipeline=malware-classification
{
  "appeared" : "2020-04-01 00:00:00",
  "byte_0" : 0.1622137576341629,
  "byte_1" : 0.007498478516936302,
  "byte_2" : 0.003992937505245209,
  "byte_3" : 0.00546838915720582,
  "byte_4" : 0.007421959958970547,
  "byte_5" : 0.0025378242135047913,
  "byte_6" : 0.002135345945134759,
  "byte_7" : 0.001892974367365241,
  "byte_8" : 0.007126075681298971,
  "byte_9" : 0.001768250367604196,
  "byte_10" : 0.0055223405789583921,
  "byte_11" : 0.001283507444895804,
  "byte_12" : 0.008042919423431158,
  "byte_13" : 0.001533839968033135,
  "byte_14" : 0.0010570581071078777,
  "byte_15" : 0.006860705558210611,
  ...
}
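For a stream of binaries rather than a single document, the Bulk API can route each document through the same pipeline. The helper below is an illustrative sketch that builds bulk actions with a per-action pipeline parameter; verify that your Elasticsearch version supports pipeline in the bulk action metadata before relying on it.

```python
def bulk_actions(docs, index, pipeline):
    """Build Elasticsearch bulk-API action/source pairs that route each
    document through the given ingest pipeline -- the batch equivalent
    of the single-document POST above."""
    actions = []
    for doc in docs:
        actions.append({"index": {"_index": index, "pipeline": pipeline}})
        actions.append(doc)
    return actions
```

The resulting list can be serialized as newline-delimited JSON and sent to `POST _bulk`, or passed to a client library's bulk helper.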

As a result, in our destination index main_preds, we now have a new document that has been enriched with the predictions from our trained machine learning model. If we view the document (for example, using the Discover tab), we will see that per our configuration, the inference processor has added the predictions of the trained machine learning model to the document. In this case, our document (which represents an unknown binary we want to classify as malicious or benign) has been assigned class 1, which indicates that our model predicts this binary to be malicious. 

A snippet from the ingested document shows enrichment from our trained machine learning model

As new documents with predictions are added to the destination index, these will automatically be picked up by the Kibana dashboards, thus providing insight into how the trained model is performing on new samples over time.

Conclusion

In production environments, the buck does not (or should not!) stop with model deployment. The pipeline needs to have a way to effectively evaluate models before they reach the customer environment and monitor them closely once they are deployed. This helps data science teams foresee issues in the wild and take necessary action when there are indicators of model decay.

In this blog post, we explored why the Elastic Stack is a great platform for managing such an end-to-end machine learning pipeline: it offers scalable storage, built-in model training and hyperparameter tuning, and an exhaustive suite of visualization tools in Kibana.