Blog

We collect key news from free RSS feeds; the news is updated every 3 hours, 24/7.

Artificial Intelligence

Introducing document-level sync reports: Enhanced data sync visibility in Amazon Q Business

AWS Machine Learning Blog Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise systems. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple repositories into a unified index. This enables the Amazon Q large language model (LLM) to provide accurate, well-written answers by drawing from the consolidated data and information. The data source connectors act as a bridge, synchronizing content from disparate systems like Salesforce, Jira, and SharePoint into a centralized index that powers the natural language understanding and generative abilities of Amazon Q. Customers appreciate that Amazon Q Business securely connects to over 40 data sources. While using these data sources, customers want better visibility into the document processing lifecycle during data source sync jobs. They want to know the status of each document they attempted to crawl and index, as well as to be able to troubleshoot why certain documents were not returned with the expected answers. Additionally, they want access to metadata, timestamps, and access control lists (ACLs) for the indexed documents. We are pleased to announce a new feature in Amazon Q Business that significantly improves visibility into data source sync operations. The latest release introduces a comprehensive document-level report incorporated into the sync history, providing administrators with granular indexing status, metadata, and ACL details for every document processed during a data source sync job.
This enhancement to sync job observability enables administrators to quickly investigate and resolve ingestion or access issues encountered while setting up an Amazon Q...
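The troubleshooting workflow the excerpt describes can be sketched as a simple filter over per-document sync records. The record layout below is illustrative only, not the exact Amazon Q Business report schema:

```python
# Sketch: triaging a document-level sync report for failed documents.
# The field names ("doc_id", "status", "error") are hypothetical stand-ins
# for whatever the actual report exposes.

sample_report = [
    {"doc_id": "doc-1", "status": "INDEXED", "acl": ["group:sales"]},
    {"doc_id": "doc-2", "status": "FAILED", "error": "Unsupported file type"},
    {"doc_id": "doc-3", "status": "FAILED", "error": "ACL fetch timed out"},
]

def failed_documents(report):
    """Return (doc_id, error) pairs for documents that did not index."""
    return [(d["doc_id"], d.get("error", "unknown")) for d in report
            if d["status"] == "FAILED"]

for doc_id, error in failed_documents(sample_report):
    print(f"{doc_id}: {error}")
```

An administrator would run something like this over each sync job's report to see at a glance which documents to re-crawl or whose ACLs to fix.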
Read More
Artificial Intelligence

Derive generative AI-powered insights from ServiceNow with Amazon Q Business

AWS Machine Learning Blog Effective customer support, project management, and knowledge management are critical aspects of providing efficient customer relationship management. ServiceNow is a platform for incident tracking, knowledge management, and project management functions for software projects and has become an indispensable part of many organizations’ workflows to ensure success of the customer and the product. However, extracting valuable insights from the vast amount of data stored in ServiceNow often requires manual effort and building specialized tooling. Users such as support engineers, project managers, and product managers need to be able to ask questions about an incident or a customer, or get answers from knowledge articles in order to provide excellent customer support. Organizations use ServiceNow to manage workflows, such as IT services, ticketing systems, configuration management, and infrastructure changes across IT systems. Generative artificial intelligence (AI) provides the ability to take relevant information from a data source such as ServiceNow and provide well-constructed answers back to the user. Building a generative AI-based conversational application integrated with relevant data sources requires an enterprise to invest time, money, and people. First, you need to build connectors to the data sources. Next, you need to index this data to make it available for a Retrieval Augmented Generation (RAG) approach, where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve and rank the answers, and build a feature-rich web application. Additionally, you need to hire and staff a large team to build, maintain, and manage such a system. 
Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on...
Read More
Psychology

Hypervigilance Around Other People’s Emotions and Needs

Psychology Today: The Latest Those with a history of people-pleasing behavior often have shaky boundaries where they ignore or downplay their own needs in order to put others’ needs ahead of their own. Go to Source 14/08/2024 - 07:34 /Annie Tanasugarn Ph.D., CCTSA Twitter: @hoffeldtcom
Read More
Psychology

Guiding Your Teen Through the First Year of High School

Psychology Today: The Latest Are you ready for the rollercoaster that is your teen's first year of high school? Learn the tools you'll need to make this journey smoother for everyone involved. Go to Source 14/08/2024 - 07:34 /Hannah Leib LCSW Twitter: @hoffeldtcom
Read More
Psychology

6 Practices for Our Rootless Lives

Psychology Today: The Latest Many of us feel rootless and disconnected. Imaginative and playful spiritual-ish practices can help change that. Go to Source 14/08/2024 - 07:34 /Keith S. Cox Ph.D. Twitter: @hoffeldtcom
Read More
Psychology

Can Financial Psychology Help Me?

Psychology Today: The Latest Many of us are currently facing financial stress. What can we do about it? Can therapy help? Go to Source 14/08/2024 - 07:34 /Courtney Crisp Psy.D. Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Intelligent healthcare forms analysis with Amazon Bedrock

AWS Machine Learning Blog Generative artificial intelligence (AI) provides an opportunity for improvements in healthcare by combining and analyzing structured and unstructured data across previously disconnected silos. Generative AI can help raise the bar on efficiency and effectiveness across the full scope of healthcare delivery. The healthcare industry generates and collects a significant amount of unstructured textual data, including clinical documentation such as patient information, medical history, and test results, as well as non-clinical documentation like administrative records. This unstructured data can impact the efficiency and productivity of clinical services, because it’s often found in various paper-based forms that can be difficult to manage and process. Streamlining the handling of this information is crucial for healthcare providers to improve patient care and optimize their operations. Handling large volumes of data, extracting unstructured data from multiple paper forms or images, and comparing it with the standard or reference forms can be a long and arduous process, prone to errors and inefficiencies. However, advancements in generative AI solutions have introduced automated approaches that offer a more efficient and reliable solution for comparing multiple documents. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. Amazon Bedrock offers a serverless experience, so you can get started quickly, privately customize FMs with your own data, and quickly integrate and deploy them into your applications using AWS tools without having to manage the infrastructure. In this post, we explore using the Anthropic Claude 3 large language model (LLM) on Amazon Bedrock.
Amazon Bedrock provides access to several LLMs, such as Anthropic Claude 3, which can be used to generate semi-structured...
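The form-comparison step the excerpt describes, checking extracted data against a standard or reference form, reduces to a field-level diff once an LLM or OCR step has produced structured output. The field names and the exact-match check below are illustrative:

```python
# Sketch: comparing fields extracted from a scanned healthcare form
# (e.g. by an LLM or OCR step) against a reference form definition.
# Field names are hypothetical examples.

reference_form = {"patient_name", "date_of_birth", "allergies"}

def compare_to_reference(extracted: dict, reference: set) -> dict:
    """Classify reference fields as missing (absent or empty) and flag extras."""
    missing = [f for f in sorted(reference) if not extracted.get(f)]
    extra = [f for f in sorted(extracted) if f not in reference]
    return {"missing": missing, "extra": extra}

extracted = {"patient_name": "J. Doe", "allergies": "", "fax": "555-0100"}
print(compare_to_reference(extracted, reference_form))
```

In practice the extraction itself would come from a model call (for example, Anthropic Claude 3 on Amazon Bedrock returning JSON), with this kind of check as the validation layer behind it.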
Read More
Psychology

The Mental Health Benefits of Sports for All Teens

Psychology Today: The Latest Engaging in too many sports and activities can lead to anxiety and depression in teens, while choosing the ideal number can boost self-worth and reduce depression. Go to Source 14/08/2024 - 07:34 /Kimberly Key Ph.D. Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity Webinar Series: Understanding Stigma and Discrimination as Drivers of Mental Health Disparities for Diverse, Rural, LGBTQ+ Communities

NIMH News Feed This webinar will present the goals and procedures of the Rural Engagement and Approaches For LGBTQ+ Mental Health (REALM) study, which is developing a longitudinal cohort of diverse LGBTQ+ adults residing in rural and small metropolitan communities across the United States. Go to Source 14/08/2024 - 07:34 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Harness the power of AI and ML using Splunk and Amazon SageMaker Canvas

AWS Machine Learning Blog As the scale and complexity of data handled by organizations increase, traditional rules-based approaches to analyzing the data alone are no longer viable. Instead, organizations are increasingly looking to take advantage of transformative technologies like machine learning (ML) and artificial intelligence (AI) to deliver innovative products, improve outcomes, and gain operational efficiencies at scale. Furthermore, the democratization of AI and ML through AWS and AWS Partner solutions is accelerating its adoption across all industries. For example, a health-tech company may be looking to improve patient care by predicting the probability that an elderly patient may become hospitalized by analyzing both clinical and non-clinical data. This will allow them to intervene early, personalize the delivery of care, and make the most efficient use of existing resources, such as hospital bed capacity and nursing staff. AWS offers the broadest and deepest set of AI and ML services and supporting infrastructure, such as Amazon SageMaker and Amazon Bedrock, to help you at every stage of your AI/ML adoption journey, including adoption of generative AI. Splunk, an AWS Partner, offers a unified security and observability platform built for speed and scale. As the diversity and volume of data increases, it is vital to understand how they can be harnessed at scale by using complementary capabilities of the two platforms. For organizations looking beyond the use of out-of-the-box Splunk AI/ML features, this post explores how Amazon SageMaker Canvas, a no-code ML development service, can be used in conjunction with data collected in Splunk to drive actionable insights. We also demonstrate how to use the generative AI capabilities of SageMaker Canvas to speed up your data exploration and help you build better ML models. Use case overview In this example, a health-tech company offering remote patient monitoring is collecting operational data from...
Read More
Artificial Intelligence

How Deltek uses Amazon Bedrock for question and answering on government solicitation documents

AWS Machine Learning Blog This post is co-written by Kevin Plexico and Shakun Vohra from Deltek. Question and answering (Q&A) using documents is a commonly used application in various use cases like customer support chatbots, legal research assistants, and healthcare advisors. Retrieval Augmented Generation (RAG) has emerged as a leading method for using the power of large language models (LLMs) to interact with documents in natural language. This post provides an overview of a custom solution developed by the AWS Generative AI Innovation Center (GenAIIC) for Deltek, a globally recognized standard for project-based businesses in both government contracting and professional services. Deltek serves over 30,000 clients with industry-specific software and information solutions. In this collaboration, the AWS GenAIIC team created a RAG-based solution for Deltek to enable Q&A on single and multiple government solicitation documents. The solution uses AWS services including Amazon Textract, Amazon OpenSearch Service, and Amazon Bedrock. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) and LLMs from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Deltek is continuously working on enhancing this solution to better align it with their specific requirements, such as supporting file formats beyond PDF and implementing more cost-effective approaches for their data ingestion pipeline. What is RAG? RAG is a process that optimizes the output of LLMs by allowing them to reference authoritative knowledge bases outside of their training data sources before generating a response. 
This approach addresses some of the challenges associated with LLMs, such as presenting false, outdated, or generic information, or creating inaccurate responses due to terminology confusion. RAG enables LLMs to generate...
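The RAG process defined above can be summed up in one small sketch: retrieved passages are placed into the prompt so the model answers from authoritative sources rather than from its training data alone. The prompt template here is illustrative, not Deltek's or AWS's actual one:

```python
# Sketch of the generation step in RAG: ground the LLM prompt in
# retrieved passages. The instruction wording is a hypothetical example.

def build_rag_prompt(question: str, passages: list) -> str:
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the numbered context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_rag_prompt(
    "What is the submission deadline?",
    ["Proposals are due 30 days after solicitation release."],
)
print(prompt)
```

Instructing the model to admit when context is insufficient is one way to curb the false or outdated answers the excerpt mentions.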
Read More
Psychology

Women who spend time on TikTok feel less satisfied with their bodies

PsycPORT™: Psychology Newswire Study says participants who were exposed to pro-anorexia content felt worse about themselves. Go to Source 14/08/2024 - 07:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Cisco achieves 50% latency improvement using Amazon SageMaker Inference faster autoscaling feature

AWS Machine Learning Blog This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco. Webex by Cisco is a leading provider of cloud-based collaboration solutions, which include video meetings, calling, messaging, events, polling, asynchronous video, and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels their innovation, which leverages AI and machine learning to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps – including AWS. Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, leveraging large language models (LLMs) to improve user productivity and experiences. In the past year, the team has increasingly focused on building artificial intelligence (AI) capabilities powered by LLMs to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, the WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing, and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLM models grew to contain hundreds of gigabytes of data, the WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance. This post highlights how Cisco implemented the faster autoscaling feature.
For more details on Cisco’s use cases, solution, and benefits, see How Cisco accelerated the use of generative AI with Amazon SageMaker Inference. In this post, we will discuss the following: an overview of Cisco’s use case and architecture, and an introduction to the new faster...
Read More
Psychology

Combat brain fatigue

PsycPORT™: Psychology Newswire A new study finds that people see a downside to "putting on their thinking cap". Go to Source 14/08/2024 - 07:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

How Cisco accelerated the use of generative AI with Amazon SageMaker Inference

AWS Machine Learning Blog This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco. Webex by Cisco is a leading provider of cloud-based collaboration solutions, including video meetings, calling, messaging, events, polling, asynchronous video, and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels their innovation, which uses artificial intelligence (AI) and machine learning (ML), to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps—including AWS. Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, using large language models (LLMs) to improve user productivity and experiences. In the past year, the team has increasingly focused on building AI capabilities powered by LLMs to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, the WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing (NLP), and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLM models grew to contain hundreds of gigabytes of data, the WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance. 
This post highlights how Cisco implemented new functionalities and migrated existing workloads to Amazon SageMaker inference components for their industry-specific contact center use cases. By integrating generative AI, they can now analyze call transcripts to better understand customer pain points and improve agent productivity. Cisco has...
Read More
Psychology

People experiencing colorism say health system fails them

PsycPORT™: Psychology Newswire Clinicians from various ethnic groups have recently begun to draw a direct line between colorism and poor health. Go to Source 14/08/2024 - 07:33 / Twitter: @hoffeldtcom
Read More
Psychology

People are surprisingly reluctant to reach out to old friends

PsycPORT™: Psychology Newswire Researchers found that the biggest barrier to reaching out was fear the friend wouldn't want to hear from them. Go to Source 14/08/2024 - 07:32 / Twitter: @hoffeldtcom
Read More
Business News

Keppel unit enters JV to manage cooling sector in Thailand

The Straits Times Business News Since launching its energy-as-a-service business in late 2021, Keppel has signed several deals. Go to Source 14/08/2024 - 07:32 / Twitter: @hoffeldtcom
Read More
Covid-19

Long Covid health issues persist in those hospitalised early in pandemic, study finds

Coronavirus | The Guardian A substantial proportion have cognitive and mental health problems years after infection, with some symptoms worsening. Health problems and brain fog can persist for years in people hospitalised by Covid early in the pandemic, with some patients developing more severe and even new symptoms after 12 months, researchers say. They found that while many people with long Covid improved over time, a substantial proportion still had cognitive problems two to three years later and saw symptoms of depression, anxiety and fatigue worsen rather than subside. Go to Source 14/08/2024 - 07:32 /Ian Sample Science editor Twitter: @hoffeldtcom
Read More
Psychology

Why Your Therapist Won’t Just Tell You What to Do

Psychology Today: The Latest Therapists and coaches purposefully avoid giving advice as no one can fully know your lived experience. Therapy is about your identified needs, not your therapist's opinions. Go to Source 26/07/2024 - 11:37 /Julie Radico Psy.D. ABPP Twitter: @hoffeldtcom
Read More
Psychology

Lessons From the Astronauts

Psychology Today: The Latest The Overview Effect, experienced by astronauts viewing Earth from space, can transform mental health by fostering a sense of interconnectedness and compassion. Go to Source 26/07/2024 - 11:37 /Sarah Abedi M.D. Twitter: @hoffeldtcom
Read More
Psychology

A New Questionnaire Measures Gaslighting Victimization

Psychology Today: The Latest Researchers have developed a new questionnaire to measure gaslighting in a relationship. Go to Source 26/07/2024 - 11:37 /Gwendolyn Seidman Ph.D. Twitter: @hoffeldtcom
Read More
Psychology

Lying While Telling the Truth

Psychology Today: The Latest Do you sometimes feel that people are misleading you with facts? Learn how to optimize your relationships by raising the standard for honesty and trust. Go to Source 26/07/2024 - 11:37 /Daniel S. Lobel Ph.D. Twitter: @hoffeldtcom
Read More
Psychology

Crippling Realities for Today’s Kids: How Caring Adults Can Help

Psychology Today: The Latest If we’re not careful, 21st-century culture can damage our kids. Consider these intentional action steps. Go to Source 26/07/2024 - 11:37 /Tim Elmore Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity’s Disability, Equity, and Mental Health Research Webinar Series: Transforming Mental Health Disability Research Through Lived Experience Leadership and Co-Production

NIMH News Feed This webinar will introduce a range of approaches to meaningfully integrate individuals with lived experiences of psychiatric disabilities into mental health research. Go to Source 26/07/2024 - 11:37 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Amazon SageMaker inference launches faster auto scaling for generative AI models

AWS Machine Learning Blog Today, we are excited to announce a new capability in Amazon SageMaker inference that can help you reduce the time it takes for your generative artificial intelligence (AI) models to scale automatically. You can now use sub-minute metrics and significantly reduce overall scaling latency for generative AI models. With this enhancement, you can improve the responsiveness of your generative AI applications as demand fluctuates. The rise of foundation models (FMs) and large language models (LLMs) has brought new challenges to generative AI inference deployment. These advanced models often take seconds to process, while sometimes handling only a limited number of concurrent requests. This creates a critical need for rapid detection and auto scaling to maintain business continuity. Organizations implementing generative AI seek comprehensive solutions that address multiple concerns: reducing infrastructure costs, minimizing latency, and maximizing throughput to meet the demands of these sophisticated models. However, they prefer to focus on solving business problems rather than doing the undifferentiated heavy lifting to build complex inference platforms from the ground up. SageMaker provides industry-leading capabilities to address these inference challenges. It offers endpoints for generative AI inference that reduce FM deployment costs by 50% on average and latency by 20% on average by optimizing the use of accelerators. The SageMaker inference optimization toolkit, a fully managed model optimization feature in SageMaker, can deliver up to two times higher throughput while reducing costs by approximately 50% for generative AI performance on SageMaker. Besides optimization, SageMaker inference also provides streaming support for LLMs, enabling you to stream tokens in real time rather than waiting for the entire response. 
This allows for lower perceived latency and more responsive generative AI experiences, which are crucial for use cases like conversational AI assistants. Lastly, SageMaker inference provides the ability to deploy a single...
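The auto scaling behavior described above boils down to target tracking: desired capacity follows current load divided by a per-instance target, recomputed as the sub-minute metric updates. The metric name and numbers below are illustrative, not SageMaker's exact policy:

```python
# Sketch of target-tracking scaling math for a generative AI endpoint.
# "concurrent_requests" stands in for a sub-minute concurrency metric;
# the target and bounds are hypothetical example values.
import math

def desired_instances(concurrent_requests: int, target_per_instance: int,
                      min_count: int = 1, max_count: int = 10) -> int:
    """Instances needed to keep per-instance concurrency near the target."""
    raw = math.ceil(concurrent_requests / target_per_instance)
    return max(min_count, min(max_count, raw))

print(desired_instances(concurrent_requests=35, target_per_instance=4))
```

Because LLM-backed endpoints handle only a few concurrent requests each, reacting to this metric in under a minute rather than over several minutes is what cuts the scaling latency the post highlights.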
Read More
Artificial Intelligence

Find answers accurately and quickly using Amazon Q Business with the SharePoint Online connector

AWS Machine Learning Blog Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise systems. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple repositories into a unified index. This enables the Amazon Q large language model (LLM) to provide accurate, well-written answers by drawing from the consolidated data and information. The data source connectors act as a bridge, synchronizing content from disparate systems like Salesforce, Jira, and SharePoint into a centralized index that powers the natural language understanding and generative abilities of Amazon Q. To make this integration process as seamless as possible, Amazon Q Business offers multiple pre-built connectors to a wide range of data sources, including Atlassian Jira, Atlassian Confluence, Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, and many more. This allows you to create your generative AI solution with minimal configuration. For a full list of Amazon Q supported data source connectors, see Supported connectors. One of the key integrations for Amazon Q is with Microsoft SharePoint Online. SharePoint is a widely used collaborative platform that allows organizations to manage and share content, knowledge, and applications to improve productivity and decision-making. By integrating Amazon Q with SharePoint, businesses can empower their employees to access information and insights from SharePoint more efficiently and effectively. 
With the Amazon Q and SharePoint Online integration, business users can do the following: Get instant answers – Users can ask natural language questions and Amazon Q will provide accurate, up-to-date answers by searching and synthesizing...
Read More
Psychology

Episode 3: Jane the Brain and the Upset Reset

NIMH News Feed Hello kids, meet Jane the Brain! In this fun and colorful video series from the National Institute of Mental Health (NIMH), Jane, our super-smart and friendly animated character, helps kids understand big feelings like stress, frustration, and sadness. Join Jane as she explores ways to handle these emotions with relatable situations and helpful tips and coping skills. Go to Source 26/07/2024 - 11:36 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Evaluate conversational AI agents with Amazon Bedrock

AWS Machine Learning Blog As conversational artificial intelligence (AI) agents gain traction across industries, providing reliability and consistency is crucial for delivering seamless and trustworthy user experiences. However, the dynamic and conversational nature of these interactions makes traditional testing and evaluation methods challenging. Conversational AI agents also encompass multiple layers, from Retrieval Augmented Generation (RAG) to function-calling mechanisms that interact with external knowledge sources and tools. Although existing large language model (LLM) benchmarks like MT-bench evaluate model capabilities, they lack the ability to validate the application layers. The following are some common pain points in developing conversational AI agents: Testing an agent is often tedious and repetitive, requiring a human in the loop to validate the semantic meaning of the responses from the agent, as shown in the following figure. Setting up proper test cases and automating the evaluation process can be difficult due to the conversational and dynamic nature of agent interactions. Debugging and tracing how conversational AI agents route to the appropriate action or retrieve the desired results can be complex, especially when integrating with external knowledge sources and tools. Agent Evaluation, an open source solution using LLMs on Amazon Bedrock, addresses this gap by enabling comprehensive evaluation and validation of conversational AI agents at scale. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
Agent Evaluation provides the following: Built-in support for popular services, including Agents for Amazon Bedrock, Knowledge Bases for Amazon Bedrock, Amazon Q Business, and Amazon SageMaker endpoints Orchestration of concurrent, multi-turn conversations with your agent while evaluating its responses Configurable...
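The multi-turn test orchestration described above can be sketched as scripted conversations run against an agent callable. Agent Evaluation uses an LLM judge to validate semantic meaning; the keyword check and stub agent below are simplified stand-ins:

```python
# Sketch: scripted multi-turn evaluation of a conversational agent.
# stub_agent is a hypothetical stand-in for a real agent endpoint, and
# the keyword check replaces the LLM-based semantic judgment real tools use.

def stub_agent(utterance: str) -> str:
    replies = {
        "open a ticket": "I have created ticket TKT-1001 for you.",
        "what is its status": "Ticket TKT-1001 is currently open.",
    }
    return replies.get(utterance, "Sorry, I did not understand.")

def run_test_case(agent, turns):
    """turns: list of (user_utterance, expected_keyword) pairs.
    Returns one pass/fail bool per turn."""
    results = []
    for utterance, expected in turns:
        response = agent(utterance)
        results.append(expected.lower() in response.lower())
    return results

case = [("open a ticket", "TKT-1001"), ("what is its status", "open")]
print(run_test_case(stub_agent, case))
```

Automating this loop is what removes the tedious human-in-the-loop validation the post lists as a pain point.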
Read More
Psychology

Episode 2: Jane the Brain and the Frustration Sensation

NIMH News Feed Hello kids, meet Jane the Brain! In this fun and colorful video series from the National Institute of Mental Health (NIMH), Jane, our super-smart and friendly animated character, helps kids understand big feelings like stress, frustration, and sadness. Join Jane as she explores ways to handle these emotions with relatable situations and helpful tips and coping skills. Go to Source 26/07/2024 - 11:36 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Node problem detection and recovery for AWS Neuron nodes within Amazon EKS clusters

AWS Machine Learning Blog Implementing hardware resiliency in your training infrastructure is crucial to mitigating risks and enabling uninterrupted model training. By implementing features such as proactive health monitoring and automated recovery mechanisms, organizations can create a fault-tolerant environment capable of handling hardware failures or other issues without compromising the integrity of the training process. In this post, we introduce the AWS Neuron node problem detector and recovery DaemonSet for AWS Trainium and AWS Inferentia on Amazon Elastic Kubernetes Service (Amazon EKS). This component can quickly detect rare occurrences of issues when Neuron devices fail by tailing monitoring logs. It marks worker nodes with a defective Neuron device as unhealthy, and promptly replaces them with new worker nodes. By accelerating the speed of issue detection and remediation, it increases the reliability of your ML training and reduces the wasted time and cost due to hardware failure. This solution is applicable if you’re using managed nodes or self-managed node groups (which use Amazon EC2 Auto Scaling groups) on Amazon EKS. At the time of writing this post, automatic recovery of nodes provisioned by Karpenter is not yet supported. Solution overview The solution is based on the node problem detector and recovery DaemonSet, a powerful tool designed to automatically detect and report various node-level problems in a Kubernetes cluster. The node problem detector component will continuously monitor the kernel message (kmsg) logs on the worker nodes. If it detects error messages specifically related to the Neuron device (which is the Trainium or AWS Inferentia chip), it will change NodeCondition to NeuronHasError on the Kubernetes API server. The node recovery agent is a separate component that periodically checks the Prometheus metrics exposed by the node problem detector. When it finds a node condition indicating an issue with the Neuron device, it will...
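The detection step described above, tailing kmsg logs and flipping a node condition, can be sketched as a log scan. The error patterns below are invented for illustration; only the NeuronHasError condition name comes from the post:

```python
# Sketch: derive a node condition from kernel (kmsg) log lines, as the
# node problem detector does. The log patterns are hypothetical examples.

NEURON_ERROR_PATTERNS = ("neuron", "nd0", "dma abort")

def node_condition(kmsg_lines: list) -> str:
    """Return 'NeuronHasError' if any line looks like a Neuron device error."""
    for line in kmsg_lines:
        lower = line.lower()
        if "error" in lower and any(p in lower for p in NEURON_ERROR_PATTERNS):
            return "NeuronHasError"
    return "Healthy"

logs = [
    "usb 1-1: new high-speed device",
    "neuron: nd0 error: DMA abort detected on core 2",
]
print(node_condition(logs))
```

In the real system this condition is reported to the Kubernetes API server, and the separate recovery agent reacts to it by replacing the node.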
Read More
Psychology

Episode 1: Jane the Brain and the Stress Mess

NIMH News Feed Jane has a big test coming up, and did we mention a science fair project too?? Learn more about how stress affects the brain and join Jane as she learns important skills like box breathing to help her manage stress. Go to Source 26/07/2024 - 11:36 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Mistral Large 2 is now available in Amazon Bedrock

AWS Machine Learning Blog Mistral AI’s Mistral Large 2 (24.07) foundation model (FM) is now generally available in Amazon Bedrock. Mistral Large 2 is the newest version of Mistral Large, and according to Mistral AI offers significant improvements across multilingual capabilities, math, reasoning, coding, and much more. In this post, we discuss the benefits and capabilities of this new model with some examples. Overview of Mistral Large 2 Mistral Large 2 is an advanced large language model (LLM) that, according to Mistral AI, offers state-of-the-art reasoning, knowledge, and coding capabilities. It is multilingual by design, supporting dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic, and Hindi. Per Mistral AI, a significant effort was also devoted to enhancing the model’s reasoning capabilities. One of the key focuses during training was to minimize the model’s tendency to hallucinate, or generate plausible-sounding but factually incorrect or irrelevant information. This was achieved by fine-tuning the model to be more cautious and discerning in its responses, making sure it provides reliable and accurate outputs. Additionally, the new Mistral Large 2 is trained to acknowledge when it can’t find solutions or doesn’t have sufficient information to provide a confident answer. According to Mistral AI, the model is also proficient in coding, trained on over 80 programming languages such as Python, Java, C, C++, JavaScript, Bash, Swift, and Fortran. With its best-in-class agentic capabilities, it can natively call functions and output JSON, enabling seamless interaction with external systems, APIs, and tools. Additionally, Mistral Large 2 (24.07) boasts advanced reasoning and mathematical capabilities, making it a powerful asset for tackling complex logical and computational challenges. Mistral Large 2 also offers an increased context window of 128,000 tokens.
At the time of writing, the model (mistral.mistral-large-2407-v1:0) is available in the us-west-2 AWS...
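As a quick sketch, the model ID above can be invoked through the Bedrock Converse API with boto3. The region follows the post (us-west-2); the helper function and inference parameters are illustrative assumptions, and the live call requires AWS credentials with access to the model.

```python
MODEL_ID = "mistral.mistral-large-2407-v1:0"  # model ID from the post

def build_messages(prompt):
    """Shape a single-turn user message for the Bedrock Converse API."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_mistral(prompt, region="us-west-2"):
    """Illustrative live call; needs boto3 and Bedrock model access in us-west-2."""
    import boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(
        modelId=MODEL_ID,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 512, "temperature": 0.3},
    )
    return resp["output"]["message"]["content"][0]["text"]

print(build_messages("Bonjour!"))  # [{'role': 'user', 'content': [{'text': 'Bonjour!'}]}]
```

Using the model-agnostic Converse API means the same calling code works if you later swap in another Bedrock FM.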
Read More
Psychology

Youth With Conduct Disorder Show Widespread Differences in Brain Structure

NIMH News Feed The largest neuroimaging study of conduct disorder to date, with funding from NIH, has revealed extensive changes in brain structure among young people with the disorder. The largest difference was a smaller area of the brain’s outer layer, known as the cerebral cortex, which is critical for many aspects of behavior, cognition and emotion. Go to Source 26/07/2024 - 11:35 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Score for predicting dementia risk also may predict depression

PsycPORT™: Psychology Newswire The Brain Care Score is a tool for assessing dementia or stroke risk without medical procedures. Go to Source 26/07/2024 - 11:35 / Twitter: @hoffeldtcom
Read More
Vulnerable people with Covid struggling to access treatments in England, experts warn
Covid-19

Vulnerable people with Covid struggling to access treatments in England, experts warn

Coronavirus | The Guardian Responsibility for prescriptions moving to 42 integrated care boards has led to patients having to work out how to get treatment, often when ill. People at higher risk from Covid are struggling to get timely access to treatments such as antiviral drugs, charities, patients and doctors have warned amid a summer wave of the virus. People with certain health conditions or who meet other specific criteria are eligible for medications that can help the body fight the virus that causes Covid. They include those 85 years or older or who have Down’s syndrome, an organ transplant, a weakened immune system, lung cancer or sickle cell disease. Go to Source 26/07/2024 - 11:35 /Nicola Davis Science correspondent Twitter: @hoffeldtcom
Read More
Psychology

The pandemic caused an increase in teen eating disorders

PsycPORT™: Psychology Newswire Findings show increases in adolescents and young adults seeking both inpatient and outpatient care for an eating disorder in the aftermath of COVID-19. Go to Source 26/07/2024 - 11:35 / Twitter: @hoffeldtcom
Read More
Management

Nine Greater Cincinnati money management firms experienced acquisitions this year

Human Resources News - Human Resources News Headlines | Bizjournals.com In total, nine Greater Cincinnati money management firms either acquired – or were acquired by – another company from January to April this year. Go to Source 22/07/2024 - 13:43 /Isabella Ferrentino Twitter: @hoffeldtcom
Read More
Management

Atlanta private aviation company Volato Group hires president of key unit

Human Resources News - Human Resources News Headlines | Bizjournals.com Mark Ozenick will lead Volato Aircraft Management Services. Go to Source 19/07/2024 - 12:19 /Chris Fuhrmeister Twitter: @hoffeldtcom
Read More
Management

Lawsuit accusing ex-Centene execs of fraud dismissed

Human Resources News - Human Resources News Headlines | Bizjournals.com A Delaware judge dismissed a lawsuit alleging that a majority of Centene Corp.’s former Board of Directors breached their fiduciary duties by failing to oversee pharmacy benefit management (PBM) practices. Go to Source 17/07/2024 - 23:19 /James Drew Twitter: @hoffeldtcom
Read More
Management

Explore St. Louis fills new executive post ahead of president’s exit

Human Resources News - Human Resources News Headlines | Bizjournals.com The tourism and convention agency has hired a new C-suite executive to guide its sales, marketing and revenue management as it launches a search for its next president. Go to Source 17/07/2024 - 23:19 /Diana Barr Twitter: @hoffeldtcom
Read More
Management

Coleman Team to take over as president of Front Street Capital, Robin Team to become chairman

Human Resources News - Human Resources News Headlines | Bizjournals.com Robin Team, who founded the Winston-Salem commercial real estate development, investment and management firm in 1984, will become chairman of the board of directors and remain involved in the company's future. See who will take over daily leadership as the firm's managing partner. Go to Source 15/07/2024 - 18:53 /Elizabeth 'Lilly' Egan Twitter: @hoffeldtcom
Read More
Management

Big general contractor names new president

Human Resources News - Human Resources News Headlines | Bizjournals.com One of the St. Louis region's largest general contractors has named a new president who brings the company over 35 years of operations and management experience in the architecture, engineering and construction industry. Go to Source 13/07/2024 - 13:18 /Diana Barr Twitter: @hoffeldtcom
Read More
Management

Fenton-based marketing giant makes acquisition

Human Resources News - Human Resources News Headlines | Bizjournals.com The Fenton-based provider of events management and incentive programs has expanded with an acquisition. Go to Source 09/07/2024 - 00:13 /Diana Barr Twitter: @hoffeldtcom
Read More
Covid-19

Downtown Eastside could lose 2 public toilets as funding dries up

The city first installed the facilities, both operated by the Overdose Prevention Society, as a temporary response to the growing homelessness crisis and the COVID-19 pandemic.  Go to Source 02/07/2024 - 03:30 /Simon Little Twitter: @hoffeldtcom
Read More
Business News

Apple, Tesla lift stocks to higher close in light pre-holiday trading

The Straits Times Business News NEW YORK - Megacap growth stocks led by Apple and Tesla lifted the tech-heavy Nasdaq to a higher close on July 1, while the Dow and the S&P 500 also eked out slight gains in light pre-holiday trading. Go to Source 02/07/2024 - 00:13 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build a self-service digital assistant using Amazon Lex and Knowledge Bases for Amazon Bedrock

AWS Machine Learning Blog Organizations strive to implement efficient, scalable, cost-effective, and automated customer support solutions without compromising the customer experience. Generative artificial intelligence (AI)-powered chatbots play a crucial role in delivering human-like interactions by providing responses from a knowledge base without the involvement of live agents. These chatbots can efficiently handle generic inquiries, freeing up live agents to focus on more complex tasks. Amazon Lex provides advanced conversational interfaces using voice and text channels. Its natural language understanding capabilities identify user intent more accurately and fulfill it faster. Amazon Bedrock simplifies the process of developing and scaling generative AI applications powered by large language models (LLMs) and other foundation models (FMs). It offers access to a diverse range of FMs from leading providers such as Anthropic, AI21 Labs, Cohere, and Stability AI, as well as Amazon’s proprietary Amazon Titan models. Additionally, Knowledge Bases for Amazon Bedrock empowers you to develop applications that harness the power of Retrieval Augmented Generation (RAG), an approach where retrieving relevant information from data sources enhances the model’s ability to generate contextually appropriate and informed responses. The generative AI capability of QnAIntent in Amazon Lex lets you securely connect FMs to company data for RAG. QnAIntent provides an interface to use enterprise data and FMs on Amazon Bedrock to generate relevant, accurate, and contextual responses. You can use QnAIntent with new or existing Amazon Lex bots to automate FAQs through text and voice channels, such as Amazon Connect. With this capability, you no longer need to create variations of intents, sample utterances, slots, and prompts to predict and handle a wide range of FAQs.
You can simply connect QnAIntent to company knowledge sources and the bot can immediately handle questions using the allowed content. In...
Read More
Artificial Intelligence

Identify idle endpoints in Amazon SageMaker

AWS Machine Learning Blog Amazon SageMaker is a machine learning (ML) platform designed to simplify the process of building, training, deploying, and managing ML models at scale. With a comprehensive suite of tools and services, SageMaker offers developers and data scientists the resources they need to accelerate the development and deployment of ML solutions. In today’s fast-paced technological landscape, efficiency and agility are essential for businesses and developers striving to innovate. AWS plays a critical role in enabling this innovation by providing a range of services that abstract away the complexities of infrastructure management. By handling tasks such as provisioning, scaling, and managing resources, AWS allows developers to focus more on their core business logic and iterate quickly on new ideas. As developers deploy and scale applications, unused resources such as idle SageMaker endpoints can accumulate unnoticed, leading to higher operational costs. This post addresses the issue of identifying and managing idle endpoints in SageMaker. We explore methods to monitor SageMaker endpoints effectively and distinguish between active and idle ones. Additionally, we walk through a Python script that automates the identification of idle endpoints using Amazon CloudWatch metrics. Identify idle endpoints with a Python script To effectively manage SageMaker endpoints and optimize resource utilization, we use a Python script built on the AWS SDK for Python (Boto3) to interact with SageMaker and CloudWatch. This script automates the process of querying CloudWatch metrics to determine endpoint activity and identifies idle endpoints based on the number of invocations over a specified time period.
Let’s break down the key components of the Python script and explain how each part contributes to the identification of idle endpoints: Global variables and AWS client initialization – The script begins by importing necessary modules and initializing global variables such as NAMESPACE, METRIC, LOOKBACK, and PERIOD. These variables...
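A minimal sketch of that logic follows: the global variables named above configure a CloudWatch query, and endpoints whose Invocations sum falls at or below a threshold are flagged idle. The dimension names and the AllTraffic variant default are assumptions; the live query function needs a boto3 CloudWatch client and AWS credentials, while the classification step runs standalone.

```python
import datetime

# Globals mirroring the ones the post describes
NAMESPACE = "AWS/SageMaker"
METRIC = "Invocations"
LOOKBACK = 7      # days of history to inspect
PERIOD = 86400    # one datapoint per day

def classify_endpoints(invocation_totals, threshold=0):
    """Split {endpoint_name: total_invocations} into (idle, active) name lists."""
    idle = sorted(n for n, t in invocation_totals.items() if t <= threshold)
    active = sorted(n for n, t in invocation_totals.items() if t > threshold)
    return idle, active

def sum_invocations(cloudwatch, endpoint_name):
    """Total Invocations over the lookback window (pass a boto3 CloudWatch client)."""
    end = datetime.datetime.now(datetime.timezone.utc)
    start = end - datetime.timedelta(days=LOOKBACK)
    resp = cloudwatch.get_metric_statistics(
        Namespace=NAMESPACE, MetricName=METRIC,
        Dimensions=[{"Name": "EndpointName", "Value": endpoint_name},
                    {"Name": "VariantName", "Value": "AllTraffic"}],  # assumed default variant
        StartTime=start, EndTime=end, Period=PERIOD, Statistics=["Sum"],
    )
    return sum(dp["Sum"] for dp in resp["Datapoints"])

print(classify_endpoints({"churn-model": 0, "recsys": 1423}))  # (['churn-model'], ['recsys'])
```

In practice you would list endpoints with `sagemaker.list_endpoints()`, feed each name through `sum_invocations`, and review the idle list before deleting anything.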
Read More
Artificial Intelligence

Indian language RAG with Cohere multilingual embeddings and Anthropic Claude 3 on Amazon Bedrock

AWS Machine Learning Blog Media and entertainment companies serve multilingual audiences with a wide range of content catering to diverse audience segments. These enterprises have access to massive amounts of data collected over their many years of operations. Much of this data is unstructured text and images. Conventional approaches to analyzing unstructured data for generating new content rely on the use of keyword or synonym matching. These approaches don’t capture the full semantic context of a document, making them less effective for users’ search, content creation, and several other downstream tasks. Text embeddings use machine learning (ML) capabilities to capture the essence of unstructured data. These embeddings are generated by language models that map natural language text into numerical representations and, in the process, encode contextual information in the natural language document. Generating text embeddings is the first step to many natural language processing (NLP) applications powered by large language models (LLMs) such as Retrieval Augmented Generation (RAG), text generation, entity extraction, and several other downstream business processes. Converting text to embeddings using the Cohere multilingual embedding model Despite the rising popularity and capabilities of LLMs, the language most often used to converse with the LLM, often through a chat-like interface, is English. And although progress has been made in adapting open source models to comprehend and respond in Indian languages, such efforts fall short of the English language capabilities displayed among larger, state-of-the-art LLMs. This makes it difficult to adopt such models for RAG applications based on Indian languages. In this post, we showcase a RAG application that can search and query across multiple Indian languages using the Cohere Embed – Multilingual model and Anthropic Claude 3 on Amazon Bedrock.
This post focuses on Indian languages, but you can use the approach with other languages that are supported by...
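A sketch of the embedding step: the similarity math below runs standalone, while the `embed` helper shows one plausible way to call the Cohere Embed Multilingual model on Amazon Bedrock. The model ID and request body shape are assumptions based on the post, and the live call needs boto3, AWS credentials, and model access.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def embed(texts):
    """Illustrative Bedrock call to the Cohere multilingual embedding model."""
    import json, boto3  # needs credentials and Bedrock model access
    client = boto3.client("bedrock-runtime")
    body = json.dumps({"texts": texts, "input_type": "search_document"})
    resp = client.invoke_model(modelId="cohere.embed-multilingual-v3", body=body)
    return json.loads(resp["body"].read())["embeddings"]

# Identical vectors score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```

Because the model maps different languages into one vector space, a Hindi query embedding can be scored directly against English document embeddings with the same cosine function.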
Read More
Management

Using AI in Performance Management: The Keys for HR Success

15Five AI isn’t coming for your job, but it might help tons of people at your organization do better at theirs. How? As more and more HR tools adopt this technology, AI-powered performance management will soon become the norm. In a nutshell, performance management covers the practices and processes you use to track each employee’s performance, build realistic goals for them, and help them along their journey as they grow with your organization. It’s an essential part of managing your workforce, but it’s also labor-intensive. Here’s where AI comes in. What is AI-powered performance management? AI-powered performance management uses AI to enhance the way you empower employees to hit their goals. In some cases, that’ll mean using specialized tools that add AI functionality to processes you’re already using, like 15Five’s Spark AI assistant for HR leaders. In others, it’ll mean using AI tools that aren’t necessarily linked to HR processes (e.g. a chatbot like ChatGPT) in ways that support your performance management efforts. If some part of your performance management process is automated, you’re likely using AI, even if you don’t realize it. Let’s explore that further. Performance management using AI: key applications While there are some obvious applications of HR tools in performance management—like asking ChatGPT to write parts of your performance reviews—there are a lot of other ways they can be used. Finding top performers and struggling employees: As they work, your employees create tons of data. Data about how they meet their deadlines, who they’re collaborating with, and how much time they put towards specific tasks. AI tools can crawl through all of that data, turning it into insights managers can use to identify their top performers and help those who need it most. Analyzing employee strengths and weaknesses: Beyond just finding your top performers, AI can...
Read More
Business News

Chevron doctrine overturned: Republicans, big business praise Supreme Court decision

US Top News and Analysis The Supreme Court ruling overturned the case known as Chevron v. Natural Resources Defense Council, reducing the authority of federal regulatory agencies. Go to Source 28/06/2024 - 18:49 / Twitter: @hoffeldtcom
Read More
Management

How To Create a Workplace Culture That Truly Embraces Diversity

15Five Diversity is strength. Period. The numbers show that diverse teams perform better than their more homogeneous counterparts. But just like any other objective, there are right and wrong ways to build a workplace culture that promotes diversity. I’m Andrew Adeniyi, CEO of AAA Solutions and author of The Circle of Leadership: A Framework for Creating & Leveraging Culture. My mission is to empower leaders to build a culture that works for everyone, and I recently talked about this on the HR Superstars podcast. Company culture starts with leadership While a leader’s first instinct is often to empower their team to build a culture of diversity from the ground up in their daily actions, that’s not how you build culture. Author John Maxwell said it best in The 360 Degree Leader: “Everything rises and falls on leadership.” I’ve seen this as well in my work. In organizations with poor culture, you can typically point at leadership—or a lack thereof—as the main cause. Employee engagement is essential to building and maintaining a company culture rooted in diversity and belonging, and companies with lackluster leadership generally have lower engagement. As a leader looking to build that culture, that starts with a statement about how committed you are to promoting diversity, equity, and inclusion while acknowledging that you don’t have it all figured out yet—but you’re going to keep trying. Feedback is a big part of this too, and leaders need to be ready to receive it. When Stefan Larson, former CEO of Ralph Lauren Polo, led Old Navy decades ago, he turned things around by going back to its core values. The biggest one? Innovation, a big part of why Old Navy was founded, wasn’t the norm when he became CEO. To make innovation the norm again, Larson incentivized open sharing through...
Read More
Management

Diverse perspectives: A business imperative — Table of Experts

Human Resources News - Human Resources News Headlines | Bizjournals.com In this panel discussion, Achieve Vice President of DEI and Learning and Development Henri’ Dawes discusses DEI in the workplace with Achieve professionals Andy Wadhwa, vice president of Product Management; Linda Luman, executive vice president of Human Resources; Heather Marcom, vice president of Talent Acquisition; and Jamal Williams, senior manager of New Client Enrollment for Sales. Henri’ Dawes: Hello everyone, and welcome to our panel discussion hosted by Achieve. My name is Henri’ Dawes,… Go to Source 28/06/2024 - 09:23 /Achieve Twitter: @hoffeldtcom
Read More
Psychology

Placebo Workshop: Translational Research Domains and Key Questions

NIMH News Feed The National Institute of Mental Health (NIMH) will host a virtual workshop on the placebo effect. The purpose of this workshop is to bring together experts in neurobiology, clinical trials, and regulatory science to examine placebo effects in drug, device, and psychosocial interventions for mental health conditions. Go to Source 28/06/2024 - 06:10 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

The future of productivity agents with NinjaTech AI and AWS Trainium

AWS Machine Learning Blog This is a guest post by Arash Sadrieh, Tahir Azim, and Tengfui Xue from NinjaTech AI. NinjaTech AI’s mission is to make everyone more productive by taking care of time-consuming complex tasks with fast and affordable artificial intelligence (AI) agents. We recently launched MyNinja.ai, one of the world’s first multi-agent personal AI assistants, to drive towards our mission. MyNinja.ai is built from the ground up using specialized agents that are capable of completing tasks on your behalf, including scheduling meetings, conducting deep research from the web, generating code, and helping with writing. These agents can break down complicated, multi-step tasks into branched solutions, and are capable of evaluating the generated solutions dynamically while continually learning from past experiences. All of these tasks are accomplished in a fully autonomous and asynchronous manner, freeing you up to continue your day while Ninja works on these tasks in the background, and engaging when your input is required. Because no single large language model (LLM) is perfect for every task, we knew that building a personal AI assistant would require multiple LLMs optimized specifically for a variety of tasks. In order to deliver the accuracy and capabilities to delight our users, we also knew that we would require these multiple models to work together in tandem. Finally, we needed scalable and cost-effective methods for training these various models—an undertaking that has historically been costly to pursue for most startups. In this post, we describe how we built our cutting-edge productivity agent NinjaLLM, the backbone of MyNinja.ai, using AWS Trainium chips. Building a dataset We recognized early that to deliver on the mission of tackling tasks on a user’s behalf, we needed multiple models that were optimized for specific tasks. Examples include our Deep Researcher, Deep Coder, and Advisor models. After...
Read More
Artificial Intelligence

Build generative AI applications on Amazon Bedrock — the secure, compliant, and responsible foundation

AWS Machine Learning Blog Generative AI has revolutionized industries by creating content, from text and images to audio and code. Although it can unlock numerous possibilities, integrating generative AI into applications demands meticulous planning. Amazon Bedrock is a fully managed service that provides access to large language models (LLMs) and other foundation models (FMs) from leading AI companies through a single API. It provides a broad set of tools and capabilities to help build generative AI applications. Starting today, I’ll be writing a blog series to highlight some of the key factors driving customers to choose Amazon Bedrock. One of the most important reasons is that Bedrock enables customers to build a secure, compliant, and responsible foundation for generative AI applications. In this post, I explore how Amazon Bedrock helps address security and privacy concerns, enables secure model customization, accelerates auditability and incident response, and fosters trust through transparency and responsible AI. Plus, I’ll showcase real-world examples of companies building secure generative AI applications on Amazon Bedrock—demonstrating its practical applications across different industries. Listening to what our customers are saying During the past year, my colleague Jeff Barr, VP & Chief Evangelist at AWS, and I have had the opportunity to speak with numerous customers about generative AI. They mention compelling reasons for choosing Amazon Bedrock to build and scale their transformative generative AI applications. Jeff’s video highlights some of the key factors driving customers to choose Amazon Bedrock today. As you build and operationalize generative AI, it’s important not to lose sight of critically important elements—security, compliance, and responsible AI—particularly for use cases involving sensitive data.
The OWASP Top 10 for LLMs outlines the most common vulnerabilities, but addressing them may require additional effort, including stringent access controls, data encryption, prevention of prompt injection attacks, and compliance with policies. You want...
Read More
Business News

Ascott announces 6 key appointments in global expansion move

The Straits Times Business News Company charting new growth trajectory towards achieving target of over $500 million in fee revenue by 2028. Go to Source 28/06/2024 - 00:55 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build a conversational chatbot using different LLMs within single interface – Part 1

AWS Machine Learning Blog With the advent of generative artificial intelligence (AI), foundation models (FMs) can generate content such as answering questions, summarizing text, and providing highlights from the sourced document. However, for model selection, there is a wide choice from model providers, like Amazon, Anthropic, AI21 Labs, Cohere, and Meta, coupled with discrete real-world data formats in PDF, Word, text, CSV, image, audio, or video. Amazon Bedrock is a fully managed service that makes it straightforward to build and scale generative AI applications. Amazon Bedrock offers a choice of high-performing FMs from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon, through a single API. It enables you to privately customize FMs with your data using techniques such as fine-tuning, prompt engineering, and Retrieval Augmented Generation (RAG), and build agents that run tasks using your enterprise systems and data sources while complying with security and privacy requirements. In this post, we show you a solution for building a single interface conversational chatbot that allows end-users to choose between different large language models (LLMs) and inference parameters for varied input data formats. The solution uses Amazon Bedrock to create choice and flexibility to improve the user experience and compare the model outputs from different options. The entire code base is available in GitHub, along with an AWS CloudFormation template. What is RAG? Retrieval Augmented Generation (RAG) can enhance the generation process by using the benefits of retrieval, enabling a natural language generation model to produce more informed and contextually appropriate responses. By incorporating relevant information from retrieval into the generation process, RAG aims to improve the accuracy, coherence, and informativeness of the generated content.
Implementing an effective RAG system requires several key components working in harmony: Foundation models – The foundation of a RAG architecture is...
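The interplay of those components can be illustrated with a toy retrieve-then-generate loop. The keyword-overlap scorer below is purely illustrative; a production system would rank passages with embeddings in a vector store (for example via Knowledge Bases for Amazon Bedrock) and send the augmented prompt to an FM.

```python
def retrieve(query, passages, k=2):
    """Return the k passages sharing the most words with the query (toy scorer)."""
    q = set(query.lower().split())
    return sorted(passages,
                  key=lambda p: len(q & set(p.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, passages):
    """Splice retrieved passages into the prompt handed to the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Amazon Bedrock offers foundation models through a single API.",
    "RAG retrieves relevant passages before generation.",
    "Paris is the capital of France.",
]
print(build_prompt("What does RAG retrieve?", retrieve("What does RAG retrieve?", docs, k=1)))
```

Whatever LLM the user selects in the chatbot interface, the retrieval step stays the same; only the final generation call changes.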
Read More
Covid-19

CRA says legal action coming to recover COVID benefit overpayments

Starting in July, Canada Revenue Agency said it will begin issuing legal warnings and could start to take steps to recover overpayments of all COVID-19 programs Go to Source 27/06/2024 - 19:23 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Melissa Choi named director of MIT Lincoln Laboratory

MIT News - Artificial intelligence Melissa Choi has been named the next director of MIT Lincoln Laboratory, effective July 1. Currently assistant director of the laboratory, Choi succeeds Eric Evans, who will step down on June 30 after 18 years as director. Sharing the news in a letter to MIT faculty and staff today, Vice President for Research Ian Waitz noted Choi’s 25-year career of “outstanding technical and advisory leadership,” both at MIT and in service to the defense community. “Melissa has a marvelous technical breadth as well as excellent leadership and management skills, and she has presented a compelling strategic vision for the Laboratory,” Waitz wrote. “She is a thoughtful, intuitive leader who prioritizes communication, collaboration, mentoring, and professional development as foundations for an organizational culture that advances her vision for Lab-wide excellence in service to the nation.” Choi’s appointment marks a new chapter in Lincoln Laboratory’s storied history working to keep the nation safe and secure. As a federally funded research and development center operated by MIT for the Department of Defense, the laboratory has provided the government an independent perspective on critical science and technology issues of national interest for more than 70 years. Distinctive among national R&D labs, the laboratory specializes in both long-term system development and rapid demonstration of operational prototypes, to protect and defend the nation against advanced threats. In tandem with its role in developing technology for national security, the laboratory’s integral relationship with the MIT campus community enables impactful partnerships on fundamental research, teaching, and workforce development in critical science and technology areas. “In a time of great global instability and fast-evolving threats, the mission of Lincoln Laboratory has never been more important to the nation,” says MIT President Sally Kornbluth.
“It is also vital that the laboratory apply government-funded, cutting-edge technologies to solve critical problems in fields...
Read More
Business News

Insurer Prudential to get Citi banker as new Singapore chief

The Straits Times Business News Ms Chan San San has been appointed to head Prudential’s Singapore business in about two months’ time. Go to Source 27/06/2024 - 15:30 / Twitter: @hoffeldtcom
Read More
Covid-19

LHSC reports $78-million deficit in 2023-24 fiscal year

LHSC is facing major budget pressures due to continued pandemic recovery among other factors, which resulted in $2 million more than projected in the deficit. Go to Source 26/06/2024 - 21:54 /Emily Passfield Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Automate derivative confirms processing using AWS AI services for the capital markets industry

AWS Machine Learning Blog Capital markets operation teams face numerous challenges throughout the post-trade lifecycle, including delays in trade settlements, booking errors, and inaccurate regulatory reporting. For derivative trades, it’s even more challenging. The timely settlement of derivative trades is an onerous task. This is because trades involve different counterparties and there is a high degree of variation among documents containing commercial terms (such as trade date, value date, and counterparties). We commonly see the application of screen scraping solutions with OCR in capital market organizations. These applications come with the drawback of being inflexible and high-maintenance. Artificial intelligence and machine learning (AI/ML) technologies can help capital market organizations overcome these challenges. Intelligent document processing (IDP) applies AI/ML techniques to automate data extraction from documents. Using IDP can reduce or eliminate the requirement for time-consuming human reviews. IDP has the power to transform the way capital market back-office operations work. It has the potential to boost employee efficiency, enhance cash flow by speeding up trade settlements, and minimize operational and regulatory risks. In this post, we show how you can automate and intelligently process derivative confirms at scale using AWS AI services. The solution combines Amazon Textract, a fully managed ML service to effortlessly extract text, handwriting, and data from scanned documents, and AWS Serverless technologies, a suite of fully managed event-driven services for running code, managing data, and integrating applications, all without managing servers. Solution overview The lifecycle of a derivative trade involves multiple phases, from trade research to execution, to clearing and settlement. The solution showcased in this post focuses on the trade clearing and settlement phase of the derivative trade lifecycle.
During this phase, counterparties to the trade and their agents determine and verify the exact commercial terms of the transaction and prepare for settlement. The following...
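The extraction step described above can be sketched minimally in Python. The field names and regex patterns below are illustrative assumptions (real confirms vary widely, which is why the post relies on Amazon Textract rather than hand-written rules), and `lines_from_textract` assumes the response shape of Textract's DetectDocumentText API:

```python
import re

# Hypothetical field patterns for a derivative confirm; production systems
# would lean on Textract's key-value (forms) output instead of regexes alone.
FIELD_PATTERNS = {
    "trade_date": re.compile(r"Trade Date[:\s]+([0-9]{4}-[0-9]{2}-[0-9]{2})"),
    "value_date": re.compile(r"Value Date[:\s]+([0-9]{4}-[0-9]{2}-[0-9]{2})"),
    "counterparty": re.compile(r"Counterparty[:\s]+(.+)"),
}

def lines_from_textract(response):
    """Collect LINE blocks from an Amazon Textract DetectDocumentText
    response (the real call would be
    boto3.client('textract').detect_document_text(...))."""
    return [b["Text"] for b in response.get("Blocks", [])
            if b.get("BlockType") == "LINE"]

def extract_terms(lines):
    """Pull commercial terms out of OCR'd text lines."""
    terms = {}
    for line in lines:
        for field, pattern in FIELD_PATTERNS.items():
            m = pattern.search(line)
            if m:
                terms[field] = m.group(1).strip()
    return terms
```

In the post's architecture this logic would sit behind an event-driven, serverless pipeline rather than run as a standalone script.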
Read More
Artificial Intelligence

AI-powered assistants for investment research with multi-modal data: An application of Agents for Amazon Bedrock

AWS Machine Learning Blog This post is a follow-up to Generative AI and multi-modal agents in AWS: The key to unlocking new value in financial markets. This blog is part of the series, Generative AI and AI/ML in Capital Markets and Financial Services. Financial analysts and research analysts in capital markets distill business insights from financial and non-financial data, such as public filings, earnings call recordings, market research publications, and economic reports, using a variety of tools for data mining. They face many challenges because of the increasing variety of tools and amount of data. They must synthesize massive amounts of data from multiple sources, qualitative and quantitative, to provide insights and recommendations. Analysts need to learn new tools and even some programming languages such as SQL (with different variations). To add to these challenges, they must think critically under time pressure and perform their tasks quickly to keep up with the pace of the market. Investment research is the cornerstone of successful investing, and involves gathering and analyzing relevant information about potential investment opportunities. Through thorough research, analysts come up with a hypothesis, test the hypothesis with data, and understand the effect before portfolio managers make decisions on investments, as well as mitigate risks associated with their investments. Artificial intelligence (AI)-powered assistants can boost the productivity of financial analysts, research analysts, and quantitative traders in capital markets by automating many of their tasks, freeing them to focus on high-value creative work. AI-powered assistants can amplify an analyst’s productivity by searching for relevant information in the customer’s own database as well as online, conducting qualitative and quantitative analysis on structured and unstructured data, enabling analysts to work faster and with greater accuracy. 
In this post, we introduce a solution using Agents for Amazon Bedrock and Knowledge Bases for Amazon...
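Querying an agent like the one described above typically goes through the bedrock-agent-runtime InvokeAgent API, which streams the answer back as chunk events. A small sketch, where the chunk-assembly helper is pure and the agent IDs in the commented call are placeholders for resources created in your own account:

```python
def assemble_agent_answer(completion_events):
    """Join the streamed chunk events returned by the
    bedrock-agent-runtime invoke_agent call into one answer string."""
    parts = []
    for event in completion_events:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

# The call itself (requires AWS credentials plus an agent and alias
# created in your account; the IDs and question are placeholders):
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.invoke_agent(
#     agentId="AGENT_ID",
#     agentAliasId="ALIAS_ID",
#     sessionId="analyst-session-1",
#     inputText="Summarize the key risks in the latest filings",
# )
# print(assemble_agent_answer(response["completion"]))
```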
Read More
Management

How to Implement Artificial Intelligence in Human Resources

15Five The AI furor might be long past, but the tools are here to stay. Tons of tools HR teams are already using have AI features built right in, so you might be using them without realizing it. But whether your team hasn’t implemented AI in their workflows yet or you want to get more out of what you’re already doing, here are a few things to consider. For one, there are four main types of artificial intelligence HR teams can work with: Generative AI tools use massive banks of data to create human-equivalent text, images, and other types of content. Machine learning platforms analyze data to pull out insights that can teach machines to perform tasks the way a human might. Natural language processing tools can review the text people write and draw conclusions like the sentiment or ideas it expresses. Predictive analytics is exactly what it sounds like. These tools use historical data to pick up on trends and help their users predict how these might change in the future. These AI tools can be used independently (like ChatGPT for generative AI) or be bundled into existing HR tools (like 15Five’s Spark for natural language processing). Now let’s cover everything else you need to know about using AI in HR processes. How is artificial intelligence used in human resources? AI can help streamline all sorts of processes, saving you time, improving employee experience, and freeing up HR resources for more important tasks. Here’s a quick list of ways HR teams can use AI tools in their everyday work. Candidate pre-screening When you’re getting dozens of applications across multiple job postings each week, you need a bit of a helping hand. Automated candidate pre-screening has already been in use by HR teams for years, and it’s exactly what it...
Read More
Artificial Intelligence

AI21 Labs Jamba-Instruct model is now available in Amazon Bedrock

AWS Machine Learning Blog We are excited to announce the availability of the Jamba-Instruct large language model (LLM) in Amazon Bedrock. Jamba-Instruct is built by AI21 Labs, and most notably supports a 256,000-token context window, making it especially useful for processing large documents and complex Retrieval Augmented Generation (RAG) applications. What is Jamba-Instruct? Jamba-Instruct is an instruction-tuned version of the Jamba base model, previously open sourced by AI21 Labs, which combines Structured State Space model (SSM) technology with the Transformer architecture in a production-grade model. With the SSM approach, Jamba-Instruct is able to achieve the largest context window length in its model size class while also delivering the performance traditional transformer-based models provide. These models yield a performance boost over AI21’s previous generation of models, the Jurassic-2 family of models. For more information about the hybrid SSM/Transformer architecture, refer to the Jamba: A Hybrid Transformer-Mamba Language Model whitepaper. Get started with Jamba-Instruct To get started with Jamba-Instruct models in Amazon Bedrock, first you need to get access to the model. On the Amazon Bedrock console, choose Model access in the navigation pane. Choose Modify model access. Select the AI21 Labs models you want to use and choose Next. Choose Submit to request model access. For more information, refer to Model access. Next, you can test the model either in the Amazon Bedrock Text or Chat playground. Example use cases for Jamba-Instruct Jamba-Instruct’s long context length is particularly well-suited for complex Retrieval Augmented Generation (RAG) workloads, or potentially complex document analysis. For example, it would be suitable for detecting contradictions between different documents or analyzing one document in the context of another. 
The following is an example prompt suitable for this use case: You are an expert research assistant; you are to note any contradictions between the first document and second document provided: Document...
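Invoking the example prompt programmatically goes through the Bedrock Runtime InvokeModel API. A sketch that builds the request body for the contradiction-detection prompt; the chat-style body shape and the model ID are assumptions to verify against the current AI21/Bedrock documentation:

```python
import json

MODEL_ID = "ai21.jamba-instruct-v1:0"  # check the Bedrock console for the exact ID

def build_contradiction_request(doc_a, doc_b, max_tokens=1024):
    """Build a Jamba-Instruct request body for the contradiction-detection
    prompt from the post."""
    prompt = (
        "You are an expert research assistant; you are to note any "
        "contradictions between the first document and second document "
        f"provided:\nDocument 1:\n{doc_a}\n\nDocument 2:\n{doc_b}"
    )
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })

# The actual call (requires AWS credentials and granted model access):
# import boto3
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(
#     modelId=MODEL_ID,
#     body=build_contradiction_request(doc_a, doc_b),
# )
```

Because of the 256,000-token context window, both documents can be passed in full rather than chunked first.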
Read More
Artificial Intelligence

Scale and simplify ML workload monitoring on Amazon EKS with AWS Neuron Monitor container

AWS Machine Learning Blog Amazon Web Services is excited to announce the launch of the AWS Neuron Monitor container, an innovative tool designed to enhance the monitoring capabilities of AWS Inferentia and AWS Trainium chips on Amazon Elastic Kubernetes Service (Amazon EKS). This solution simplifies the integration of advanced monitoring tools such as Prometheus and Grafana, enabling you to set up and manage your machine learning (ML) workflows with AWS AI Chips. With the new Neuron Monitor container, you can visualize and optimize the performance of your ML applications, all within a familiar Kubernetes environment. The Neuron Monitor container can also run on Amazon Elastic Container Service (Amazon ECS), but for the purpose of this post, we primarily discuss Amazon EKS deployment. In addition to the Neuron Monitor container, the release of CloudWatch Container Insights (for Neuron) provides further benefits. This extension provides a robust monitoring solution, offering deeper insights and analytics tailored specifically for Neuron-based applications. With Container Insights, you can now access more granular data and comprehensive analytics, making it effortless for developers to maintain high performance and operational health of their ML workloads. Solution overview The Neuron Monitor container solution provides a comprehensive monitoring framework for ML workloads on Amazon EKS, using the power of Neuron Monitor in conjunction with industry-standard tools like Prometheus, Grafana, and Amazon CloudWatch. By deploying the Neuron Monitor DaemonSet across EKS nodes, developers can collect and analyze performance metrics from ML workload pods. In one flow, metrics gathered by Neuron Monitor are integrated with Prometheus, which is configured using a Helm chart for scalability and ease of management. These metrics are then visualized through Grafana, offering you detailed insights into your applications’ performance for effective troubleshooting and optimization. 
Alternatively, metrics can also be directed to CloudWatch through the CloudWatch Observability EKS add-on...
Read More
Artificial Intelligence

Build an automated insight extraction framework for customer feedback analysis with Amazon Bedrock and Amazon QuickSight

AWS Machine Learning Blog Extracting valuable insights from customer feedback presents several significant challenges. Manually analyzing and categorizing large volumes of unstructured data, such as reviews, comments, and emails, is a time-consuming process prone to inconsistencies and subjectivity. Scalability becomes an issue as the amount of feedback grows, hindering the ability to respond promptly and address customer concerns. In addition, capturing granular insights, such as specific aspects mentioned and associated sentiments, is difficult. Inefficient routing and prioritization of customer inquiries or issues can lead to delays and dissatisfaction. These pain points highlight the need to streamline the process of extracting insights from customer feedback, enabling businesses to make data-driven decisions and enhance the overall customer experience. Large language models (LLMs) have transformed the way we engage with and process natural language. These powerful models can understand, generate, and analyze text, unlocking a wide range of possibilities across various domains and industries. From customer service and ecommerce to healthcare and finance, the potential of LLMs is being rapidly recognized and embraced. Businesses can use LLMs to gain valuable insights, streamline processes, and deliver enhanced customer experiences. Unlike traditional natural language processing (NLP) approaches, such as classification methods, LLMs offer greater flexibility in adapting to dynamically changing categories and improved accuracy by using pre-trained knowledge embedded within the model. Amazon Bedrock, a fully managed service designed to facilitate the integration of LLMs into enterprise applications, offers a choice of high-performing LLMs from leading artificial intelligence (AI) companies like Anthropic, Mistral AI, Meta, and Amazon through a single API. 
It provides a broad set of capabilities like model customization through fine-tuning, knowledge base integration for contextual responses, and agents for running complex multi-step tasks across systems. With Amazon Bedrock, developers can experiment, evaluate, and deploy generative AI applications without worrying about infrastructure management....
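The aspect-and-sentiment extraction described above usually comes down to a structured prompt plus tolerant parsing of the model's reply. A minimal sketch; the JSON schema and prompt wording are illustrative assumptions, not taken from the post:

```python
import json

def build_feedback_prompt(review_text):
    """Prompt asking the LLM for aspect-level sentiment as JSON."""
    return (
        "Extract every product aspect mentioned in the customer review "
        "below and its sentiment (positive, negative, or neutral). "
        'Respond only with a JSON list of {"aspect": ..., "sentiment": ...} objects.'
        f"\n\nReview: {review_text}"
    )

def parse_insights(model_output):
    """Parse the model's JSON reply, tolerating surrounding prose."""
    start = model_output.find("[")
    end = model_output.rfind("]")
    if start == -1 or end == -1:
        return []
    return json.loads(model_output[start : end + 1])
```

The prompt would be sent through the Amazon Bedrock Runtime API, and the parsed records loaded into a table that Amazon QuickSight can visualize.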
Read More
Artificial Intelligence

Build safe and responsible generative AI applications with guardrails

AWS Machine Learning Blog Large language models (LLMs) enable remarkably human-like conversations, allowing builders to create novel applications. LLMs find use in chatbots for customer service, virtual assistants, content generation, and much more. However, the implementation of LLMs without proper caution can lead to the dissemination of misinformation, manipulation of individuals, and the generation of undesirable outputs such as harmful slurs or biased content. Enabling guardrails plays a crucial role in mitigating these risks by imposing constraints on LLM behaviors within predefined safety parameters. This post aims to explain the concept of guardrails, underscore their importance, and cover best practices and considerations for their effective implementation using Guardrails for Amazon Bedrock or other tools. Introduction to guardrails for LLMs The following figure shows an example of a dialogue between a user and an LLM. As demonstrated in this example, LLMs are capable of facilitating highly natural conversational experiences. However, it’s also clear that LLMs without appropriate guardrail mechanisms can be problematic. Consider the following levels of risk when building or deploying an LLM-powered application: User-level risk – Conversations with an LLM may generate responses that your end-users find offensive or irrelevant. Without appropriate guardrails, your chatbot application may also state incorrect facts in a convincing manner, a phenomenon known as hallucination. Additionally, the chatbot could go as far as providing ill-advised life or financial recommendations when you don’t take measures to restrict the application domain. Business-level risk – Conversations with a chatbot might veer off-topic into open-ended and controversial subjects that are irrelevant to your business needs or even harmful to your company’s brand. An LLM deployed without guardrails might also create a vulnerability risk for you or your organization. 
Malicious actors might attempt to manipulate your LLM application into exposing confidential or protected information, or harmful outputs. To mitigate...
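To make the idea concrete, here is a deliberately tiny pre-filter of the kind a guardrail layer generalizes: it rejects inputs touching denied topics before they ever reach the model. Guardrails for Amazon Bedrock does this far more robustly (topic policies, content filters, PII redaction), and the identifiers in the commented call are placeholders for a guardrail created in your account:

```python
def violates_denied_topics(user_input, denied_terms):
    """Naive keyword check for denied topics; an illustrative stand-in
    for a real guardrail's topic policy, not a production filter."""
    text = user_input.lower()
    return any(term.lower() in text for term in denied_terms)

# With Guardrails for Amazon Bedrock, the check is instead attached to the
# model invocation itself (requires AWS credentials and a created guardrail):
# import boto3
# bedrock = boto3.client("bedrock-runtime")
# bedrock.invoke_model(
#     modelId="MODEL_ID",
#     guardrailIdentifier="GUARDRAIL_ID",
#     guardrailVersion="1",
#     body=request_body,
# )
```

A keyword list is easy to bypass, which is exactly why managed guardrails evaluate both the input and the model's output against configurable policies.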
Read More
Artificial Intelligence

Improve visibility into Amazon Bedrock usage and performance with Amazon CloudWatch

AWS Machine Learning Blog Amazon Bedrock has enabled customers to build delightful new experiences for their customers using generative artificial intelligence (AI). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities that you need to build generative AI applications with security, privacy, and responsible AI. With some of the best FMs available at their fingertips within Amazon Bedrock, customers are experimenting and innovating faster than ever before. As customers look to operationalize these new generative AI applications, they also need prescriptive, out-of-the-box ways to monitor the health and performance of these applications. In this blog post, we will share some of the capabilities to help you get quick and easy visibility into Amazon Bedrock workloads in the context of your broader application. We will use the contextual conversational assistant example in the Amazon Bedrock GitHub repository to provide examples of how you can customize these views to further enhance visibility, tailored to your use case. Specifically, we will describe how you can use the new automatic dashboard in Amazon CloudWatch to get single-pane-of-glass visibility into the usage and performance of Amazon Bedrock models and gain end-to-end visibility by customizing dashboards with widgets that provide visibility and insights into components and operations such as Retrieval Augmented Generation in your application. Announcing Amazon Bedrock automatic dashboard in CloudWatch CloudWatch has automatic dashboards for customers to quickly gain insights into the health and performance of their AWS services. A new automatic dashboard for Amazon Bedrock was added to provide insights into key metrics for Amazon Bedrock models. 
To access the new automatic dashboard from the AWS Management Console:...
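The same metrics that power the automatic dashboard can be queried directly. A sketch that builds a CloudWatch GetMetricData query for Bedrock invocation counts; the namespace, metric, and dimension names follow the Bedrock CloudWatch documentation, but verify them for your Region:

```python
def bedrock_invocations_query(model_id, period_seconds=300):
    """Build a CloudWatch GetMetricData query for the AWS/Bedrock
    Invocations metric, filtered to one model ID."""
    return {
        "Id": "invocations",
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/Bedrock",
                "MetricName": "Invocations",
                "Dimensions": [{"Name": "ModelId", "Value": model_id}],
            },
            "Period": period_seconds,
            "Stat": "Sum",
        },
    }

# Usage (requires AWS credentials; the model ID is a placeholder):
# import boto3, datetime
# cw = boto3.client("cloudwatch")
# resp = cw.get_metric_data(
#     MetricDataQueries=[bedrock_invocations_query("MODEL_ID")],
#     StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
#     EndTime=datetime.datetime.utcnow(),
# )
```

Queries like this are also what custom dashboard widgets are built from when you extend the automatic dashboard.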
Read More
Covid-19

PPE worth £1.4bn from single Covid deal destroyed or written off

Coronavirus | The Guardian UK government deal struck at height of pandemic described as ‘colossal misuse of public funds’. An estimated £1.4bn worth of personal protective equipment (PPE) bought by the government in a single deal has been destroyed or written off, according to new figures described as the worst example of waste in the Covid pandemic. The figures obtained by the BBC under freedom of information laws showed that 1.57bn items from the NHS supplier Full Support Healthcare will never be used. Continue reading... Go to Source 25/06/2024 - 16:21 / Matthew Weaver Twitter: @hoffeldtcom
Read More
Covid-19

Will there be more air travel chaos this summer?

BBC News Air travel is booming, but last year delays were much worse than pre-pandemic. Will 2024 be the same? Go to Source 25/06/2024 - 09:21 / Twitter: @hoffeldtcom
Read More
Business News

Oracle warns that a TikTok ban would hurt business

US Top News and Analysis Oracle provides cloud services to TikTok, and is warning investors that a ban of the app could hurt the company's revenue. Go to Source 25/06/2024 - 00:28 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Implement exact match with Amazon Lex QnAIntent

AWS Machine Learning Blog This post is a continuation of Creating Natural Conversations with Amazon Lex QnAIntent and Amazon Bedrock Knowledge Base. In summary, we explored new capabilities available through Amazon Lex QnAIntent, powered by Amazon Bedrock, that enable you to harness natural language understanding and your own knowledge repositories to provide real-time, conversational experiences. In many cases, Amazon Bedrock is able to generate accurate responses that meet the needs for a wide variety of questions and scenarios, using your knowledge content. However, some enterprise customers have regulatory requirements or more rigid brand guidelines, requiring certain questions to be answered verbatim with pre-approved responses. For these use cases, Amazon Lex QnAIntent provides exact match capabilities with both Amazon Kendra and Amazon OpenSearch Service knowledge bases. In this post, we walk through how to set up and configure an OpenSearch Service cluster as the knowledge base for your Amazon Lex QnAIntent. In addition, exact match works with Amazon Kendra, and you can create an index and add frequently asked questions to your index. As detailed in Part 1 of this series, you can then select Amazon Kendra as your knowledge base under Amazon Lex QnA Configurations, provide your Amazon Kendra index ID, and select exact match to let your bot return the exact response returned by Amazon Kendra. Solution overview In the following sections, we walk through the steps to create an OpenSearch Service domain, create an OpenSearch index and populate it with documents, and test the Amazon Lex bot with QnAIntent. Prerequisites Before creating an OpenSearch Service cluster, you need to create an Amazon Lex V2 bot. If you don’t have an Amazon Lex V2 bot available, complete the following steps: On the Amazon Lex console, choose Bots in the navigation pane. Choose Create bot. Select Start with an...
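Exact match against OpenSearch hinges on a keyword-typed field, which is matched verbatim rather than analyzed. A sketch of the index mapping and query; the field names are illustrative assumptions, not prescribed by the post:

```python
def faq_index_body():
    """Mapping for an FAQ index: 'question' is analyzed for full-text
    search, while its 'raw' keyword sub-field supports exact matching."""
    return {
        "mappings": {
            "properties": {
                "question": {
                    "type": "text",
                    "fields": {"raw": {"type": "keyword"}},
                },
                "answer": {"type": "text"},
            }
        }
    }

def exact_match_query(question):
    """Term query against the keyword sub-field returns only verbatim matches."""
    return {"query": {"term": {"question.raw": question}}}

# With the opensearch-py client the index would be created and queried like:
# client.indices.create(index="faq", body=faq_index_body())
# client.search(index="faq", body=exact_match_query("What is my coverage?"))
```

This is the property that lets QnAIntent return a pre-approved answer word for word instead of a generated paraphrase.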
Read More
Management

How to Keep Employees Engaged When You’re Short-Staffed

15Five Short-staffed organizations don’t have enough people to handle all the work that needs to get done. When that goes on for too long, employees start taking notice and employee engagement suffers. Watching important tasks go undone, having to work through lunch to hit deadlines, and feeling like promotion opportunities have gone the way of the dodo all take their toll on your workforce. No matter how much work you put into your hiring plan or how closely you watch attrition and turnover, your organization will likely be short-staffed at one point or another. When this is the case, you’ll have a larger part to play in maintaining employee engagement on top of an increase in requests and complaints from employees. That’s why HR professionals and managers need to work together to proactively develop employee engagement strategies that can see the organization through these periods. Without a plan in place, employee engagement will continue to steadily decrease. If you’re short-staffed long enough, morale will start to go down, too. Employee engagement software can help with this, but you still need the right plan. Here’s a comprehensive guide on how being short-staffed affects your business and what HR professionals can do to keep engagement from slipping. How does being short-staffed affect your business? Whether it’s due to a wave of layoffs, high turnover, or broader market conditions, short-staffed organizations quickly start to feel the pinch of their situation. After all, your employees might not necessarily know you’re short-staffed with any certainty, but they can quickly recognize the signs, such as: Too much work and not enough people to do it. Increased responsibilities without promises of advancement or promotion. Shelving of once-important projects after reprioritization. Drastic changes in the organization’s overall strategy. Increased monitoring and micromanagement from leaders. Being short-staffed doesn’t just...
Read More
Artificial Intelligence

How Krikey AI harnessed the power of Amazon SageMaker Ground Truth to accelerate generative AI development

AWS Machine Learning Blog This post is co-written with Jhanvi Shriram and Ketaki Shriram from Krikey. Krikey AI is revolutionizing the world of 3D animation with their innovative platform that allows anyone to generate high-quality 3D animations using just text or video inputs, without needing any prior animation experience. At the core of Krikey AI’s offering is their powerful foundation model trained to understand human motion and translate text descriptions into realistic 3D character animations. However, building such a sophisticated artificial intelligence (AI) model requires tremendous amounts of high-quality training data. Krikey AI faced the daunting task of labeling a vast amount of data input containing body motions with descriptive text labels. Manually labeling this dataset in-house was impractical and prohibitively expensive for the startup. But without these rich labels, their customers would be severely limited in the animations they could generate from text inputs. Amazon SageMaker Ground Truth is an AWS managed service that makes it straightforward and cost-effective to get high-quality labeled data for machine learning (ML) models by combining ML and expert human annotation. Krikey AI used SageMaker Ground Truth to expedite the development and implementation of their text-to-animation model. SageMaker Ground Truth provided and managed the labeling workforce, provided advanced data labeling workflows, and automated workflows for human-in-the-loop tasks, enabling Krikey AI to efficiently source precise labels tailored to their needs. SageMaker Ground Truth Implementation As a small startup working to democratize 3D animation through AI, Krikey AI faced the challenge of preparing a large labeled dataset to train their text-to-animation model. Manually labeling each data input with descriptive annotations proved incredibly time-consuming and impractical to do in-house at scale. 
With customer demand rapidly growing for their AI animation services, Krikey AI needed a way to quickly obtain high-quality labels across diverse and broad categories. Not...
Read More
Business News

Target taps Shopify to add sellers to its third-party marketplace

US Top News and Analysis Target is partnering with Shopify as it looks for ways to drive online traffic and get back to sales growth. Go to Source 24/06/2024 - 12:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Helping nonexperts build advanced generative AI models

MIT News - Artificial intelligence The impact of artificial intelligence will never be equitable if there’s only one company that builds and controls the models (not to mention the data that go into them). Unfortunately, today’s AI models are made up of billions of parameters that must be trained and tuned to maximize performance for each use case, putting the most powerful AI models out of reach for most people and companies. MosaicML started with a mission to make those models more accessible. The company, which counts Jonathan Frankle PhD ’23 and MIT Associate Professor Michael Carbin as co-founders, developed a platform that let users train, improve, and monitor open-source models using their own data. The company also built its own open-source models using graphical processing units (GPUs) from Nvidia. The approach made deep learning, a nascent field when MosaicML first began, accessible to far more organizations as excitement around generative AI and large language models (LLMs) exploded following the release of ChatGPT. It also made MosaicML a powerful complementary tool for data management companies that were also committed to helping organizations make use of their data without giving it to AI companies. Last year, that reasoning led to the acquisition of MosaicML by Databricks, a global data storage, analytics, and AI company that works with some of the largest organizations in the world. Since the acquisition, the combined companies have released one of the highest performing open-source, general-purpose LLMs yet built. Known as DBRX, this model has set new benchmarks in tasks like reading comprehension, general knowledge questions, and logic puzzles. Since then, DBRX has gained a reputation for being one of the fastest open-source LLMs available and has proven especially useful at large enterprises. More than the model, though, Frankle says DBRX is significant because it was built using Databricks tools, meaning...
Read More
Artificial Intelligence

Manage Amazon SageMaker JumpStart foundation model access with private hubs

AWS Machine Learning Blog Amazon SageMaker JumpStart is a machine learning (ML) hub offering pre-trained models and pre-built solutions. It provides access to hundreds of foundation models (FMs). A private hub is a feature in SageMaker JumpStart that allows an organization to share their models and notebooks so as to centralize model artifacts, facilitate discoverability, and increase reuse within the organization. With new models released daily, many enterprise admins want more control over the FMs that can be discovered and used by users within their organization (for example, only allowing models based on the PyTorch framework to be discovered). Now enterprise admins can effortlessly configure granular access control over the FMs that SageMaker JumpStart provides out of the box so that only allowed models can be accessed by users within their organizations. In this post, we discuss the steps required for an administrator to configure granular access control of models in SageMaker JumpStart using a private hub, as well as the steps for users to access and consume models from the private hub. Solution overview Starting today, with SageMaker JumpStart and its private hub feature, administrators can create repositories for a subset of models tailored to different teams, use cases, or license requirements using the Amazon SageMaker Python SDK. Admins can also set up multiple private hubs with different lists of models discoverable for different groups of users. Users are then only able to discover and use models within the private hubs they have access to through Amazon SageMaker Studio and the SDK. This level of control empowers enterprises to consume the latest in open weight generative artificial intelligence (AI) development while enforcing governance guardrails. Finally, admins can share access to private hubs across multiple AWS accounts, enabling collaborative model management while maintaining centralized control. SageMaker JumpStart uses AWS Resource Access...
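The PyTorch-only example above is easy to prototype because JumpStart model IDs are prefixed with their framework. A sketch of the admin-side filtering; the hub class and method names in the commented section follow the SageMaker Python SDK at the time of writing and should be treated as assumptions to verify against its current documentation:

```python
def allowed_models(model_ids, allowed_frameworks=("pytorch",)):
    """Filter JumpStart model IDs (e.g. 'pytorch-ic-mobilenet-v2') down to
    the frameworks an admin wants discoverable. The framework prefix
    convention makes this a simple string check."""
    return [m for m in model_ids
            if m.split("-", 1)[0] in allowed_frameworks]

# Creating the private hub and populating it with the allow-listed models
# (requires AWS credentials; names per the SageMaker Python SDK):
# from sagemaker.jumpstart.hub.hub import Hub
# hub = Hub(hub_name="finance-team-hub")
# hub.create(description="Approved PyTorch models only")
# then add each ID from allowed_models(...) to the hub through the SDK's
# model-reference APIs, so only those models are discoverable in Studio.
```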
Read More
Management

Subjective vs. Objective Performance Review Feedback: Which Works Better?

15Five Managing employees is more than a quick conversation and vague goal setting. To maximize individual and team performance, you need to learn how to deliver performance review feedback that helps people clearly understand their strengths, weaknesses, areas for improvement, and, most importantly, how they can succeed in their job roles.  There are two types of feedback in the workplace: objective and subjective. Each has a purpose for delivering effective employee feedback during a performance review, but the type of feedback you share will depend on the subject matter and the employee.  Objective feedback helps employees clearly see their performance in relation to metrics, while subjective feedback acknowledges soft skills and unique contributions that can’t be measured by numbers or metrics. For example, a data scientist is more likely to receive objective feedback, whereas a creative copywriter is more likely to hear subjective feedback. Understanding the differences: subjective vs. objective feedback Objective feedback Objective feedback is based on observable and measurable facts. It aims to be unbiased, impartial, and free of personal feelings and opinions while using quantitative statistics and data to form the basis of the feedback. The goal of objective feedback is to provide insights that are: Clear Specific Actionable Fair Measurable By ensuring that feedback is clear and specific, there’s no “gray area,” and employees should understand exactly what they’re doing well and areas that need improvement. Measurable objective feedback also promotes consistency and fairness while helping minimize biases because employees are all measured on the same criteria. However, objective feedback sometimes overlooks performance management’s personal or emotional aspects. It can also come across as too structured by neglecting creativity or interpersonal skills. 
Relying too heavily on data can also lead to missing situational context, which can impact employee happiness because individuals whose contributions aren’t easily measured...
Read More
Artificial Intelligence

eSentire delivers private and secure generative AI interactions to customers with Amazon SageMaker

AWS Machine Learning Blog eSentire is an industry-leading provider of Managed Detection & Response (MDR) services protecting users, data, and applications of over 2,000 organizations globally across more than 35 industries. These security services help their customers anticipate, withstand, and recover from sophisticated cyber threats, prevent disruption from malicious attacks, and improve their security posture. In 2023, eSentire was looking for ways to deliver differentiated customer experiences by continuing to improve the quality of its security investigations and customer communications. To accomplish this, eSentire built AI Investigator, a natural language query tool for their customers to access security platform data by using AWS generative artificial intelligence (AI) capabilities. In this post, we share how eSentire built AI Investigator using Amazon SageMaker to provide private and secure generative AI interactions to their customers. Benefits of AI Investigator Before AI Investigator, customers would engage eSentire’s Security Operation Center (SOC) analysts to understand and further investigate their asset data and associated threat cases. This involved manual effort for customers and eSentire analysts, forming questions and searching through data across multiple tools to formulate answers. eSentire’s AI Investigator enables users to complete complex queries using natural language by joining multiple sources of data from each customer’s own security telemetry and eSentire’s asset, vulnerability, and threat data mesh. This helps customers quickly and seamlessly explore their security data and accelerate internal investigations. Providing AI Investigator internally to the eSentire SOC workbench has also accelerated eSentire’s investigation process by improving the scale and efficacy of multi-telemetry investigations. 
The LLM models augment SOC investigations with knowledge from eSentire’s security experts and security data, enabling higher-quality investigation outcomes while also reducing time to investigate. Over 100 SOC analysts are now using AI Investigator models to analyze security data and provide rapid investigation conclusions. Solution overview eSentire customers expect...
Read More
Business News

SingPost appoints financial adviser for strategic review of Australia businesses

The Straits Times Business News It plans to achieve scale in Australia by exploring near-term partnerships that can contribute to growth. Go to Source 21/06/2024 - 03:14 / Twitter: @hoffeldtcom
Read More
Management

What is Continuous Performance Management?

15Five Your annual performance reviews aren’t cutting it anymore. Only 14% of your employees strongly agree that their performance reviews inspire them to improve, according to Gallup data. Traditional performance management has long been a top-down process, where employees would meet with their manager one to four times a year. This mostly involved managers talking “at them” about their strengths, weaknesses, and where they needed to improve before the next review. The result of these reviews? A one-sentence description of the employee’s performance: exceeds expectations, meets expectations, or does not meet expectations. Employees might also get an answer about whether they can expect a raise in the future or not. No wonder they don’t work. By contrast, continuous performance management fills the gaps between your annual or quarterly performance reviews while encouraging two-way conversations instead of monologues. More frequent check-ins—sometimes no more than a month apart—allow managers to guide employees along their growth plan and give feedback they can actually work with. Even better, some organizations using this sort of performance management share feedback as often as daily, giving employees a better understanding of their performance on day-to-day work. Continuous performance management also involves using the appropriate tools to track performance over time, saving the manual work that would usually be involved in this process. So why continuous performance management? And how should your organization implement it? Let’s dive in. 8 benefits of continuous performance management Continuous performance management comes with many benefits, especially when compared to its traditional counterpart. But don’t be fooled; employees aren’t the only ones who benefit. Your organization can see some significant improvements from implementing this process. Here are just a few of them. 
Better employee engagement: As noted previously, few employees find traditional performance management particularly inspiring. Conversely, getting the daily feedback that’s usually...
Read More
Artificial Intelligence

Eric Evans receives Department of Defense Medal for Distinguished Public Service

MIT News - Artificial intelligence On May 31, the U.S. Department of Defense's chief technology officer, Under Secretary of Defense for Research and Engineering Heidi Shyu, presented Eric Evans with the Department of Defense (DoD) Medal for Distinguished Public Service. This award is the highest honor given by the secretary of defense to private citizens for their significant service to the DoD. Evans was selected for his leadership as director of MIT Lincoln Laboratory and as vice chair and chair of the Defense Science Board (DSB). "I have gotten to know Eric well in the last three years, and I greatly appreciate his leadership, proactiveness, vision, intellect, and humbleness," Shyu stated in her remarks during the May 31 ceremony held at the laboratory. "Eric has a willingness and ability to confront and solve the most difficult problems for national security. His distinguished public service will continue to have invaluable impacts on the department and the nation for decades to come." During his tenure in both roles over more than a decade, Evans has cultivated relationships at the highest levels within the DoD. Since stepping into his role as laboratory director in 2006, he has advised eight defense secretaries and seven deputy defense secretaries. Under his leadership, the laboratory delivered advanced capabilities for national security in a broad range of technology areas, including cybersecurity, space surveillance, biodefense, artificial intelligence, laser communications, and quantum computing. Evans ensured that the laboratory addressed not only existing DoD priorities, but also emerging and future threats. He foresaw the need for and established three new technical divisions covering Cyber Security and Information Sciences, Homeland Protection, and Biotechnology and Human Systems. When the Covid-19 pandemic struck, he quickly pivoted the laboratory to aid the national response. To ensure U.S. 
competitiveness in an ever-evolving defense landscape, he advocated for the modernization of major...
Read More
Artificial Intelligence

Imperva optimizes SQL generation from natural language using Amazon Bedrock

AWS Machine Learning Blog This is a guest post co-written with Ori Nakar from Imperva. Imperva Cloud WAF protects hundreds of thousands of websites against cyber threats and blocks billions of security events every day. Counters and insights based on security events are calculated daily and used by users from multiple departments. Millions of counters are added daily, together with 20 million insights updated daily to spot threat patterns. Our goal was to improve the user experience of an existing application used to explore the counters and insights data. The data is stored in a data lake and retrieved by SQL using Amazon Athena. As part of our solution, we replaced multiple search fields with a single free text field. We used a large language model (LLM) with query examples to make the search work using the language used by Imperva internal users (business analysts). The following figure shows a search query that was translated to SQL and run. The results were later formatted as a chart by the application. We have many types of insights—global, industry, and customer level insights used by multiple departments such as marketing, support, and research. Data was made available to our users through a simplified user experience powered by an LLM. Figure 1: Insights search by natural language Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability AI, and Amazon within a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Amazon Bedrock Studio is a new single sign-on (SSO)-enabled web interface that provides a way for developers across an organization to experiment with LLMs and other FMs, collaborate...
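The text-to-SQL flow described above can be approximated with a simple few-shot prompt. The sketch below is illustrative only: the table names, example questions, and prompt wording are invented for this example, not Imperva's actual schema or prompts.

```python
# Illustrative few-shot prompt assembly for natural-language-to-SQL.
# Table names, example queries, and wording are hypothetical.

FEW_SHOT_EXAMPLES = [
    {
        "question": "How many security events were blocked yesterday?",
        "sql": ("SELECT COUNT(*) FROM events WHERE action = 'blocked' "
                "AND event_date = current_date - interval '1' day"),
    },
    {
        "question": "Top 5 industries by insight count",
        "sql": ("SELECT industry, COUNT(*) AS n FROM insights "
                "GROUP BY industry ORDER BY n DESC LIMIT 5"),
    },
]

def build_sql_prompt(question: str) -> str:
    """Assemble a few-shot prompt asking the model to answer with SQL only."""
    parts = [
        "Translate the analyst's question into SQL for Amazon Athena.",
        "Answer with SQL only.",
        "",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Question: {ex['question']}")
        parts.append(f"SQL: {ex['sql']}")
        parts.append("")
    parts.append(f"Question: {question}")
    parts.append("SQL:")
    return "\n".join(parts)
```

The assembled prompt would then be sent to an LLM (for example through the Amazon Bedrock Runtime API) and the returned SQL executed on Athena.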
Read More
Artificial Intelligence

Create natural conversations with Amazon Lex QnAIntent and Knowledge Bases for Amazon Bedrock

AWS Machine Learning Blog Customer service organizations today face an immense opportunity. As customer expectations grow, brands have a chance to creatively apply new innovations to transform the customer experience. Although meeting rising customer demands poses challenges, the latest breakthroughs in conversational artificial intelligence (AI) empower companies to meet these expectations. Customers today expect timely responses to their questions that are helpful, accurate, and tailored to their needs. The new QnAIntent, powered by Amazon Bedrock, can meet these expectations by understanding questions posed in natural language and responding conversationally in real time using your own authorized knowledge sources. Our Retrieval Augmented Generation (RAG) approach allows Amazon Lex to harness both the breadth of knowledge available in repositories as well as the fluency of large language models (LLMs). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. In this post, we show you how to add generative AI question answering capabilities to your bots. This can be done using your own curated knowledge sources, and without writing a single line of code. Read on to discover how QnAIntent can transform your customer experience. Solution overview Implementing the solution consists of the following high-level steps: Create an Amazon Lex bot. Create an Amazon Simple Storage Service (Amazon S3) bucket and upload a PDF file that contains the information used to answer questions. Create a knowledge base that will split your data into chunks and generate embeddings using the Amazon Titan Embeddings model. 
As part of this process, Knowledge Bases for Amazon Bedrock automatically creates an Amazon OpenSearch...
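The chunking in step 3 can be pictured with a minimal sketch. Knowledge Bases for Amazon Bedrock performs this splitting for you; the fixed-size, overlapping strategy and the parameter values below are illustrative assumptions, not the service's actual defaults.

```python
def chunk_text(text: str, max_chars: int = 300, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap, so context isn't
    lost at chunk boundaries before each chunk is embedded."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        if start + max_chars >= len(text):
            break
        start += max_chars - overlap
    return chunks
```

Each chunk would then be embedded (here, with the Amazon Titan Embeddings model) and stored in the vector index for retrieval.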
Read More
Artificial Intelligence

Evaluate the reliability of Retrieval Augmented Generation applications using Amazon Bedrock

AWS Machine Learning Blog Retrieval Augmented Generation (RAG) is a technique that enhances large language models (LLMs) by incorporating external knowledge sources. It allows LLMs to reference authoritative knowledge bases or internal repositories before generating responses, producing output tailored to specific domains or contexts while providing relevance, accuracy, and efficiency. RAG achieves this enhancement without retraining the model, making it a cost-effective solution for improving LLM performance across various applications. The following diagram illustrates the main steps in a RAG system. Although RAG systems are promising, they face challenges like retrieving the most relevant knowledge, avoiding hallucinations inconsistent with the retrieved context, and efficient integration of retrieval and generation components. In addition, RAG architecture can lead to potential issues like retrieval collapse, where the retrieval component learns to retrieve the same documents regardless of the input. A similar problem occurs for some tasks like open-domain question answering—there are often multiple valid answers available in the training data, therefore the LLM could choose to generate an answer from its training data. Another challenge is the need for an effective mechanism to handle cases where no useful information can be retrieved for a given input. Current research aims to improve these aspects for more reliable and capable knowledge-grounded generation. Given these challenges faced by RAG systems, monitoring and evaluating generative artificial intelligence (AI) applications powered by RAG is essential. Moreover, tracking and analyzing the performance of RAG-based applications is crucial, because it helps assess their effectiveness and reliability when deployed in real-world scenarios. 
By evaluating RAG applications, you can understand how well the models are using and integrating external knowledge into their responses, how accurately they can retrieve relevant information, and how coherent the generated outputs are. Additionally, evaluation can identify potential biases, hallucinations, inconsistencies, or factual errors that may arise from...
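As a concrete, deliberately crude example of such evaluation, a lexical groundedness proxy can flag answers that drift from the retrieved context. Production evaluations typically use LLM-as-judge or entailment models; this sketch only illustrates the idea.

```python
def groundedness_score(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context.
    A score well below 1.0 suggests the answer may not be grounded
    in the retrieved documents (a possible hallucination)."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)
```

Tracking a score like this over time for sampled production traffic is one way to monitor a RAG application's reliability after deployment.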
Read More
Artificial Intelligence

Connect to Amazon services using AWS PrivateLink in Amazon SageMaker

AWS Machine Learning Blog AWS customers that implement secure development environments often have to restrict outbound and inbound internet traffic. This becomes increasingly important with artificial intelligence (AI) development because of the data assets that need to be protected. Transmitting data across the internet is not secure enough for highly sensitive data. Therefore, accessing AWS services without leaving the AWS network can be a secure workflow. One of the ways you can secure AI development is by creating Amazon SageMaker instances within a virtual private cloud (VPC) with direct internet access disabled. This isolates the instance from the internet and prevents API calls to other AWS services. This presents a challenge for developers who are building architectures for production in which many AWS services need to function together. In this post, we present a solution for configuring SageMaker notebook instances to connect to Amazon Bedrock and other AWS services with the use of AWS PrivateLink and Amazon Elastic Compute Cloud (Amazon EC2) security groups. Solution overview The following example architecture shows a SageMaker instance connecting to various services. The SageMaker instance is isolated from the internet but is still able to access AWS services through PrivateLink. Note that the connection to Amazon S3 is through a Gateway VPC endpoint. You can learn more about Gateway VPC endpoints here. In the following sections, we show how to configure this on the AWS Management Console. Create security groups for outbound and inbound endpoint access First, you have to create the security groups that will be attached to the VPC endpoints and the SageMaker instance. You create the security groups before creating a SageMaker instance because after the instance has been created, the security group configuration can’t be changed. You create two groups, one for outbound and another for...
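The post walks through this setup on the console, but the same interface endpoint can be expressed as parameters to the EC2 CreateVpcEndpoint API (for example via boto3's `ec2.create_vpc_endpoint`). The resource IDs below are placeholders; this is a sketch of the request, not the post's exact configuration.

```python
def interface_endpoint_params(vpc_id: str, service: str, subnet_ids: list[str],
                              security_group_id: str,
                              region: str = "us-east-1") -> dict:
    """Build request parameters for ec2.create_vpc_endpoint
    (an interface endpoint backed by AWS PrivateLink)."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.{service}",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": [security_group_id],
        # Lets code in the VPC reach the service via its regular DNS name.
        "PrivateDnsEnabled": True,
    }

# Usage (placeholder IDs):
# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(**interface_endpoint_params(
#     "vpc-0123", "bedrock-runtime", ["subnet-0123"], "sg-0123"))
```

Amazon S3 would instead use a Gateway endpoint, as noted above, which takes route table IDs rather than subnets and security groups.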
Read More
Covid-19

Economist suggests storing grain to prepare for next global emergency

Coronavirus | The Guardian Isabella Weber, who linked corporate profits to inflation, shares how to prevent food shortages – and price gouging. Tell us: how has inflation changed the way you grocery shop? Isabella Weber, the economist who ignited controversy with a bold proposal to implement strategic price controls at the peak of inflation and identified corporate profits as a driver of high prices, has proposed a new measure that could prevent food shortages and price gouging in the wake of another disruption of the global supply chains. Weber’s new paper, published on Thursday, looks at how grain prices spiked in 2022 as Covid snagged supply chains and Russia invaded Ukraine. The price hikes helped to drive record profits for corporations while pushing inflation higher and increasing global hunger. In the paper, Weber and colleagues call for the creation of buffer stocks of grain that could be released during shortages or emergencies to ease price pressures. Continue reading... Go to Source 20/06/2024 - 12:13 / Tom Perkins Twitter: @hoffeldtcom
Read More
Business News

New Zealand exits recession but economy remains weak amid steep borrowing costs

The Straits Times Business News Strong population growth has masked how weak its economy has been, said an economist. Go to Source 20/06/2024 - 03:29 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Maximize your Amazon Translate architecture using strategic caching layers

AWS Machine Learning Blog Amazon Translate is a neural machine translation service that delivers fast, high quality, affordable, and customizable language translation. Amazon Translate supports 75 languages and 5,550 language pairs. For the latest list, see the Amazon Translate Developer Guide. A key benefit of Amazon Translate is its speed and scalability. It can translate a large body of content or text passages in batch mode or translate content in real-time through API calls. This helps enterprises get fast and accurate translations across massive volumes of content including product listings, support articles, marketing collateral, and technical documentation. When content sets have phrases or sentences that are often repeated, you can optimize cost by implementing a write-through caching layer. For example, product descriptions for items contain many recurring terms and specifications. This is where implementing a translation cache can significantly reduce costs. The caching layer stores source content and its translated text. Then, when the same source content needs to be translated again, the cached translation is simply reused instead of paying for a brand-new translation. In this post, we explain how setting up a cache for frequently accessed translations can benefit organizations that need scalable, multi-language translation across large volumes of content. You’ll learn how to build a simple caching mechanism for Amazon Translate to accelerate turnaround times. Solution overview The caching solution uses Amazon DynamoDB to store translations from Amazon Translate. DynamoDB functions as the cache layer. When a translation is required, the application code first checks the cache—the DynamoDB table—to see if the translation is already cached. If a cache hit occurs, the stored translation is read from DynamoDB with no need to call Amazon Translate again. 
If the translation isn’t cached in DynamoDB (a cache miss), then the Amazon Translate API will be called to perform the...
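The read-check-write flow described above can be sketched as follows. The DynamoDB key attribute (`pk`) and the injected `table` and `translate` objects are assumptions for illustration; in practice you would pass a boto3 DynamoDB Table resource and a Translate client.

```python
import hashlib

def cache_key(text: str, source_lang: str, target_lang: str) -> str:
    """Stable cache key for one (text, language pair) combination."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return f"{source_lang}:{target_lang}:{digest}"

def cached_translate(text, source_lang, target_lang, table, translate):
    """Write-through cache: return the stored translation on a hit,
    otherwise call Amazon Translate and store the result."""
    key = cache_key(text, source_lang, target_lang)
    item = table.get_item(Key={"pk": key}).get("Item")
    if item:  # cache hit: no Translate call, no per-character charge
        return item["translation"]
    resp = translate.translate_text(
        Text=text,
        SourceLanguageCode=source_lang,
        TargetLanguageCode=target_lang,
    )
    translation = resp["TranslatedText"]
    table.put_item(Item={"pk": key, "translation": translation})
    return translation
```

Hashing the source text keeps the key size bounded regardless of how long the passage is, at the cost of not being able to inspect the original text from the key alone.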
Read More
Covid-19

Covid immune response study could explain why some escape infection

Coronavirus | The Guardian Subjects who kept virus at bay showed rapid response in nasal immune cells and more activity in early-alert gene. Scientists have discovered differences in the immune response that could explain why some people seem to reliably escape Covid infection. The study, in which healthy adults were intentionally given a small nasal dose of Covid virus, suggested that specialised immune cells in the nose could see off the virus at the earliest stage before full infection takes hold. Those who did not succumb to infection also had high levels of activity in a gene that is thought to help flag the presence of viruses to the immune system. Continue reading... Go to Source 19/06/2024 - 18:23 / Hannah Devlin Science correspondent Twitter: @hoffeldtcom
Read More
Covid-19

Washington Post publisher alleged to have advised Boris Johnson to ‘clean up’ phone during Partygate Covid scandal

Coronavirus | The Guardian Sources’ claim suggests advice by Will Lewis, an informal adviser to then prime minister, contradicted instructions to staff. Will Lewis, the Washington Post publisher, advised Boris Johnson and senior officials at 10 Downing Street to “clean up” their phones in the midst of a Covid-era political scandal, according to claims by three people familiar with the operations inside No 10 at the time. The advice is alleged to have been given in December 2021 and January 2022 as top officials were under scrutiny for potential violations of pandemic restrictions, which was known as “Partygate”. Continue reading... Go to Source 19/06/2024 - 18:23 / Anna Isaac in London and Stephanie Kirchgaessner in Washington Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Deploy a Slack gateway for Amazon Bedrock

AWS Machine Learning Blog In today’s fast-paced digital world, streamlining workflows and boosting productivity are paramount. That’s why we’re thrilled to share an exciting integration that will take your team’s collaboration to new heights. Get ready to unlock the power of generative artificial intelligence (AI) and bring it directly into your Slack workspace. Imagine the possibilities: Quick and efficient brainstorming sessions, real-time ideation, and even drafting documents or code snippets—all powered by the latest advancements in AI. Say goodbye to context switching and hello to a streamlined, collaborative experience that will supercharge your team’s productivity. Whether you’re leading a dynamic team, working on complex projects, or simply looking to enhance your Slack experience, this integration is a game-changer. In this post, we show you how to unlock new levels of efficiency and creativity by bringing the power of generative AI directly into your Slack workspace using Amazon Bedrock. Solution overview Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. In the following sections, we guide you through the process of setting up a Slack integration for Amazon Bedrock. We show how to create a Slack application, configure the necessary permissions, and deploy the required resources using AWS CloudFormation. The following diagram illustrates the solution architecture. The workflow consists of the following steps: The user communicates with the Slack application. The Slack application sends the event to Amazon API Gateway, which is used in the event subscription. API Gateway forwards the event to an AWS Lambda function. 
The Lambda function invokes Amazon Bedrock with the request, then responds...
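Steps 3-4 above (the Lambda function invoking Amazon Bedrock) might look roughly like the sketch below, using the Bedrock Runtime Converse API. The model ID, event shape, and response handling are simplified assumptions, not the post's exact implementation, which also handles Slack request signing and posting replies back to the channel.

```python
import json

def make_handler(bedrock):
    """Build the Lambda handler around an injected Bedrock Runtime client
    (in Lambda, created once with boto3.client("bedrock-runtime"))."""
    def handler(event, context):
        body = json.loads(event["body"])
        # Slack's one-time URL verification handshake for event subscriptions
        if body.get("type") == "url_verification":
            return {"statusCode": 200, "body": body["challenge"]}
        text = body["event"]["text"]  # message text from the Slack event
        resp = bedrock.converse(
            modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model
            messages=[{"role": "user", "content": [{"text": text}]}],
        )
        answer = resp["output"]["message"]["content"][0]["text"]
        return {"statusCode": 200, "body": answer}
    return handler
```

Injecting the client keeps the handler testable; in the deployed function the same code runs behind API Gateway as described in the workflow.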
Read More
Business News

Bullish outlook on economic growth in Cambodia spurs FDI from S’pore companies

The Straits Times Business News Cambodia-Singapore Business Forum told green energy, healthcare and agri-food among promising sectors. Go to Source 19/06/2024 - 15:08 / Twitter: @hoffeldtcom
Read More
Business News

Bank of England set to hold interest rates despite inflation hitting 2% target

US Top News and Analysis Services inflation and wage growth both remain significantly above the level desired by monetary policymakers. Go to Source 19/06/2024 - 12:13 / Twitter: @hoffeldtcom
Read More
Business News

Singapore’s growth momentum is challenged by weak global outlook, says report

The Straits Times Business News The country’s economy is set to expand by 2 per cent in 2024, said Oxford Economics. Go to Source 19/06/2024 - 09:19 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed

MIT News - Artificial intelligence When the Takeda Pharmaceutical Co. and the MIT School of Engineering launched their collaboration focused on artificial intelligence in health care and drug development in February 2020, society was on the cusp of a globe-altering pandemic and AI was far from the buzzword it is today. As the program concludes, the world looks very different. AI has become a transformative technology across industries including health care and pharmaceuticals, while the pandemic has altered the way many businesses approach health care and changed how they develop and sell medicines. For both MIT and Takeda, the program has been a game-changer. When it launched, the collaborators hoped the program would help solve tangible, real-world problems. By its end, the program has yielded a catalog of new research papers, discoveries, and lessons learned, including a patent for a system that could improve the manufacturing of small-molecule medicines. Ultimately, the program allowed both entities to create a foundation for a world where AI and machine learning play a pivotal role in medicine, leveraging Takeda’s expertise in biopharmaceuticals and the MIT researchers’ deep understanding of AI and machine learning. “The MIT-Takeda Program has been tremendously impactful and is a shining example of what can be accomplished when experts in industry and academia work together to develop solutions,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In addition to resulting in research that has advanced how we use AI and machine learning in health care, the program has opened up new opportunities for MIT faculty and students through fellowships, funding, and networking.” What made the program unique was that it was centered around several concrete challenges spanning drug development that Takeda needed help addressing. 
MIT faculty had the opportunity to select the projects based on...
Read More
Artificial Intelligence

Improving air quality with generative AI

AWS Machine Learning Blog As of this writing, Ghana ranks as the 27th most polluted country in the world, facing significant challenges due to air pollution. Recognizing the crucial role of air quality monitoring, many African countries, including Ghana, are adopting low-cost air quality sensors. The Sensor Evaluation and Training Centre for West Africa (Afri-SET) aims to use technology to address these challenges. Afri-SET engages with air quality sensor manufacturers, providing crucial evaluations tailored to the African context. Through evaluations of sensors and informed decision-making support, Afri-SET empowers governments and civil society for effective air quality management. On December 6-8, 2023, the non-profit organization Tech to the Rescue, in collaboration with AWS, organized the world’s largest Air Quality Hackathon, aimed at tackling one of the world’s most pressing health and environmental challenges: air pollution. More than 170 tech teams used the latest cloud, machine learning, and artificial intelligence technologies to build 33 solutions. The solution addressed in this blog solves Afri-SET’s challenge and was ranked among the top three winning solutions. This post presents a solution that uses generative artificial intelligence (AI) to standardize air quality data from low-cost sensors in Africa, specifically addressing the air quality data integration problem of low-cost sensors. The solution harnesses the capabilities of generative AI, specifically large language models (LLMs), to address the challenges posed by diverse sensor data and automatically generate Python functions based on various data formats. The fundamental objective is to build a manufacturer-agnostic database, leveraging generative AI’s ability to standardize sensor outputs, synchronize data, and facilitate precise corrections. Current challenges Afri-SET currently merges data from numerous sources, employing a bespoke approach for each of the sensor manufacturers. 
This manual synchronization process, hindered by disparate data formats, is resource-intensive, limiting the potential for widespread data orchestration. The platform, although...
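In the approach above, the LLM generates a per-manufacturer parsing function. The sketch below shows the kind of function such a pipeline might emit for one hypothetical sensor payload; all input field names are invented, and the canonical schema is an assumption for illustration.

```python
def standardize_record(raw: dict) -> dict:
    """Map one vendor's payload to a manufacturer-agnostic schema.
    The vendor field names (ts, PM25_ugm3, ...) are hypothetical."""
    return {
        "sensor_id": raw["device"],
        "timestamp": raw["ts"],
        "pm2_5": float(raw["PM25_ugm3"]),  # µg/m³
        "pm10": float(raw["PM10_ugm3"]),   # µg/m³
    }
```

With one such generated function per manufacturer, every sensor's output lands in the same schema, which is what makes a manufacturer-agnostic database possible.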
Read More
Artificial Intelligence

Use zero-shot large language models on Amazon Bedrock for custom named entity recognition

AWS Machine Learning Blog Named entity recognition (NER) is the process of extracting information of interest, called entities, from structured or unstructured text. Manually identifying all mentions of specific types of information in documents is extremely time-consuming and labor-intensive. Some examples include extracting players and positions in an NFL game summary, products mentioned in an AWS keynote transcript, or key names from an article on a favorite tech company. This process must be repeated for every new document and entity type, making it impractical for processing large volumes of documents at scale. With more access to vast amounts of reports, books, articles, journals, and research papers than ever before, swiftly identifying desired information in large bodies of text is becoming invaluable. Traditional neural network models like RNNs and LSTMs and more modern transformer-based models like BERT for NER require costly fine-tuning on labeled data for every custom entity type. This makes adopting and scaling these approaches burdensome for many applications. However, new capabilities of large language models (LLMs) enable high-accuracy NER across diverse entity types without the need for entity-specific fine-tuning. By using the model’s broad linguistic understanding, you can perform NER on the fly for any specified entity type. This capability is called zero-shot NER and enables the rapid deployment of NER across documents and many other use cases. This ability to extract specified entity mentions without costly tuning unlocks scalable entity extraction and downstream document understanding. In this post, we cover the end-to-end process of using LLMs on Amazon Bedrock for the NER use case. 
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set...
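In practice, a zero-shot NER call reduces to building a prompt that names the entity types and parsing the model's reply. The prompt wording and the line-oriented output format below are illustrative assumptions; the prompt string would be sent to a model of your choice through the `bedrock-runtime` client.

```python
def build_ner_prompt(text: str, entity_types: list[str]) -> str:
    """Zero-shot NER prompt: name the entity types and fix the output format."""
    types = ", ".join(entity_types)
    return (
        f"Extract all entities of these types from the text: {types}.\n"
        "Return one entity per line as TYPE: mention. "
        "If none are found, return NONE.\n\n"
        f"Text: {text}"
    )

def parse_entities(model_output: str) -> list[tuple[str, str]]:
    """Parse 'TYPE: mention' lines from the model's reply."""
    entities = []
    for line in model_output.strip().splitlines():
        line = line.strip()
        if line and line != "NONE" and ":" in line:
            etype, mention = line.split(":", 1)
            entities.append((etype.strip(), mention.strip()))
    return entities
```

Because no fine-tuning is involved, switching to a new entity type is just a change to the `entity_types` list passed into the prompt builder.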
Read More

The messages, text, and photos belong to the sender of the RSS feed or to parties related to the sender.
