Modern Data Mining Algorithms in C++ and CUDA C - by Timothy Masters (Paperback)

$69.99

In Stock

Eligible for registries and wish lists


About this item

Highlights

  • Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or extracting useful features from measured variables.
  • About the Author: Timothy Masters has a PhD in statistics and is an experienced programmer.
  • 228 Pages
  • Computers + Internet, Databases

Description



Book Synopsis



Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or extracting useful features from measured variables.

As a serious data miner, you will often be faced with thousands of candidate features for your prediction or classification application, most of them of little or no value. You'll know that many of these features may be useful only in combination with certain other features while being practically worthless alone or in combination with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve these problems by presenting modern feature selection techniques and the code to implement them. Some of these techniques are:

  • Forward selection component analysis
  • Local feature selection
  • Linking features and a target with a hidden Markov model
  • Improvements on traditional stepwise selection
  • Nominal-to-ordinal conversion

All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code.

The example code is in C++ and CUDA C, but Python or other languages can be substituted; what matters is the algorithm, not the language used to implement it.
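
To make the flavor of these procedures concrete, below is a minimal, generic forward stepwise selection loop in C++. It is not code from the book: the selection criterion and the min_gain threshold are deliberately simple stand-ins (the absolute correlation of each candidate with the target), where a real application would plug in a cross-validated model score and the refinements described above.

    // Generic forward stepwise feature selection -- an illustrative sketch only,
    // not the book's source code.
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Pearson correlation between one feature column and the target.
    double correlation(const std::vector<double>& x, const std::vector<double>& y) {
        const std::size_t n = x.size();
        double mx = 0.0, my = 0.0;
        for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0.0, sxx = 0.0, syy = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / std::sqrt(sxx * syy + 1e-12);
    }

    // Greedy forward selection: at each step add the unselected feature whose
    // (placeholder) criterion gain is largest, stopping when no candidate clears
    // the min_gain threshold or max_features is reached.
    std::vector<int> forward_select(const std::vector<std::vector<double>>& features,
                                    const std::vector<double>& target,
                                    std::size_t max_features,
                                    double min_gain = 0.05) {
        std::vector<int> selected;
        std::vector<bool> used(features.size(), false);
        while (selected.size() < max_features) {
            int best_j = -1;
            double best_gain = min_gain;
            for (std::size_t j = 0; j < features.size(); ++j) {
                if (used[j]) continue;
                // Placeholder criterion: marginal |correlation| with the target.
                // A real criterion would refit the model with the candidate added
                // and score it with cross validation.
                double gain = std::fabs(correlation(features[j], target));
                if (gain > best_gain) { best_gain = gain; best_j = static_cast<int>(j); }
            }
            if (best_j < 0) break;   // no remaining candidate clears the threshold
            used[best_j] = true;
            selected.push_back(best_j);
        }
        return selected;
    }

    int main() {
        // Tiny synthetic example: feature 0 tracks the target, feature 1 is noise.
        std::vector<std::vector<double>> X = {{1.0, 2.0, 3.0, 4.0, 5.0},
                                              {2.0, 1.0, 2.0, 1.0, 2.0}};
        std::vector<double> y = {1.1, 2.0, 2.9, 4.2, 5.1};
        for (int j : forward_select(X, y, 2)) std::printf("selected feature %d\n", j);
        return 0;
    }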

What You Will Learn

  • Combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set.
  • Identify features that may have predictive power over only a small subset of the feature domain. Such features can be profitably used by modern predictive models but may be missed by other feature selection methods.
  • Find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously. The memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets.
  • Improve traditional stepwise selection in three ways: examine a collection of 'best-so-far' feature sets; test candidate features for inclusion with cross validation to automatically and effectively limit model complexity; and at each step estimate the probability that our results so far could be just the product of random good luck. We also estimate the probability that the improvement obtained by adding a new variable could have been just good luck.
  • Take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input (a minimal sketch of this idea follows the list).
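
One simple, hypothetical illustration of the last point: replace each category label with the mean of the target over the training cases in that category. This is not the book's nominal-to-ordinal algorithm, just a minimal sketch of the idea in C++ (the sector/return data below are made up for the example).

    // Nominal-to-ordinal conversion -- illustrative sketch only: map each
    // category to the mean of the target among training cases in that category.
    #include <cstddef>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    std::map<std::string, double> category_to_value(
            const std::vector<std::string>& category,   // nominal input variable
            const std::vector<double>& target) {        // numeric target
        std::map<std::string, double> sum;
        std::map<std::string, int> count;
        for (std::size_t i = 0; i < category.size(); ++i) {
            sum[category[i]] += target[i];
            count[category[i]] += 1;
        }
        std::map<std::string, double> value;
        for (const auto& kv : sum)
            value[kv.first] = kv.second / count[kv.first];  // per-category target mean
        return value;
    }

    int main() {
        // Hypothetical data: market sector (nominal) and a numeric return target.
        std::vector<std::string> sector = {"tech", "energy", "tech", "retail", "energy"};
        std::vector<double> ret = {0.12, -0.03, 0.08, 0.02, -0.01};
        for (const auto& kv : category_to_value(sector, ret))
            std::printf("%s -> %.3f\n", kv.first.c_str(), kv.second);
        return 0;
    }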

Who This Book Is For

Intermediate to advanced data science programmers and analysts.



Review Quotes




    "This is an excellent book directed toward those who are already working in data mining." (Anthony J. Duben, Computing Reviews, May 5, 2021)




    About the Author



    Timothy Masters has a PhD in statistics and is an experienced programmer. His dissertation was in image analysis. His career moved in the direction of signal processing, and for the last 25 years he's been involved in the development of automated trading systems in various financial markets.

    Dimensions (Overall): 10.0 Inches (H) x 7.0 Inches (W) x .51 Inches (D)
    Weight: .93 Pounds
    Suggested Age: 22 Years and Up
    Number of Pages: 228
    Genre: Computers + Internet
    Sub-Genre: Databases
    Publisher: Apress
    Theme: Data Mining
    Format: Paperback
    Author: Timothy Masters
    Language: English
    Street Date: June 6, 2020
    TCIN: 1008783563
    UPC: 9781484259870
    Item Number (DPCI): 247-26-0897
    Origin: Made in the USA or Imported

    Shipping details

    Estimated ship dimensions: 0.51 inches length x 7 inches width x 10 inches height
    Estimated ship weight: 0.93 pounds
    We regret that this item cannot be shipped to PO Boxes.
    This item cannot be shipped to the following locations: American Samoa, Guam, Northern Mariana Islands, Puerto Rico, United States Minor Outlying Islands, U.S. Virgin Islands, APO/FPO

    Return details

    This item can be returned to any Target store or Target.com.
    This item must be returned within 90 days of the date it was purchased in store, shipped, delivered by a Shipt shopper, or made ready for pickup.
    See the return policy for complete information.
